South African Journal of Industrial Engineering

On-line version ISSN 2224-7890
Print version ISSN 1012-277X

S. Afr. J. Ind. Eng. vol.30 n.3 Pretoria Nov. 2019

http://dx.doi.org/10.7166/30-3-2244 

SPECIAL EDITION

 

Towards Designing an Artefact Evaluation Strategy for Human Factors Engineering: A Lean Implementation Model Case Study

 

 

R. Coetzee*

School of Industrial Engineering, North-West University, South Africa

 

 


ABSTRACT

Applying scientific methods to the evaluation of design science research artefacts is necessary to recognise the design process as design science research. Prior work has reported on this crucial evaluation component; however, limited information and guidance are available on the practices that should be followed. In this study, the framework for evaluation in design science and the elaborated action design research method in the design science research paradigm were used to develop an alternative evaluation strategy for a human factors engineering artefact. A different scientific method was used to design an evaluation episode for each of the elaborated action design research iterations: (1) a gap analysis, (2) a systematic literature review, (3) an applied thematic analysis, (4) a design requirements traceability matrix, and (5) the Delphi technique. The validity of the research design was proven using design science research guidelines, action design research principles, and a research validation matrix. This evaluation strategy indicated how the strategic use of different kinds of scientific evaluations assisted in establishing the quality of the knowledge delivered by the design science process. This study has contributed to the field of (human factors) engineering by providing a pragmatic approach to solving abstract, people-related problems in industry.


OPSOMMING

Die toepassing van wetenskaplike metodes vir die evaluering van ontwerp navorsing artefakte is noodsaaklik om die ontwerpsproses as 'n wetenskap te erken. Vorige werk in die verband is al gepubliseer, maar beperkte inligting en riglyne is beskikbaar. Hierdie artikel gebruik die riglyn vir evaluasie in ontwerpswetenskap en die uitgebreide aksie-ontwerp navorsingsmetode binne die ontwerpsnavorsing paradigma om 'n alternatiewe strategie te ontwikkel waarmee 'n menslike faktore ingenieursartefak evalueer kan word. 'n Ander wetenskaplike metode is gebruik om 'n evaluasie episode te ontwerp vir elkeen van die uitgebreide aksie-ontwerp navorsingsmetode iterasies: (1) 'n gapingsanalise, (2) 'n sistematiese literatuurstudie, (3) 'n toegepaste tematiese analise, (4) 'n ontwerpvereiste naspeurbaarheidsmatriks en (5) die Delphi-tegniek. Die geldigheid van die navorsingsontwerp is bewys deur ontwerpswetenskap navorsingsriglyne te gebruik, aksie-ontwerp navorsingsbeginsels en 'n navorsingvalidasie matriks. Die evaluasie strategie dui aan hoe die strategiese gebruik van verskillende wetenskaplike evaluasies bygedra het tot die bepaling van die gehalte van die kennis wat die navorsingsontwerp proses gelewer het. Die studie dra by tot die veld van (menslike faktore) ingenieurswese deur 'n pragmatiese benadering tot die oplos van abstrakte, mense-verwante probleme in die industrie te verskaf.


 

 

1 INTRODUCTION

Production management, a specific field of industrial engineering, includes both technological and human elements [1]. Lean manufacturing is a management philosophy used to facilitate the continuous improvement changes that are required in organisations. 'Lean' addresses both the technological and the human elements of such an organisational change, since it is built on the principles of continuous improvement and respect for people [1]. However, successful lean implementations are often hindered by a lack of understanding of the original meaning and intent associated with the human aspect of lean, resulting in lean tools being used without sufficient understanding of the human elements of change during a lean implementation.

The Handbook of Industrial Engineering [2] explains that the field of industrial engineering specialises in four basic areas: human factors engineering, manufacturing systems engineering, operations research, and management systems engineering (Figure 1). Each of these four speciality areas coincides with basic knowledge areas and/or application areas such as statistics, psychology, mathematics, information sciences, accounting, economics, and organisational behaviour.

 

 

The abovementioned lean implementation problem falls within the field of industrial engineering, with a focus on the bottom right side of Figure 1, encompassing the speciality areas of human factors engineering and management systems engineering; and it is supported by the basic knowledge areas of psychology and organisational behaviour.

Given the need for a problem-solving research paradigm that facilitates the development of innovative artefacts, a design science research (DSR) paradigm was selected to address the research problem. DSR is also considered a form of research in which multiple stakeholders can collaborate to understand and address a problem [3], while seeking innovations that define the ideas, practices, technical capabilities, and products through which the analysis, design, implementation, management, and use of systems can be effectively and efficiently accomplished [4].

DSR differs from traditional research in that it focuses on learning through design - i.e., the construction of artefacts [5, 6]. An artefact is seen as a human-made object, or any object or process resulting from human activities. The word derives from the Latin words ars (skill) and facio (to make) [7]. However, such artefacts are not exempt from natural laws or behaviour theories. Their creation relies on existing kernel theories that are applied, tested, modified, and extended by the experience, creativity, intuition, and problem-solving capabilities of the researcher [4, 8, 9]. Design science research is a lens, or a set of synthetic and analytical techniques and perspectives (complementing positivist, interpretivist, and critical perspectives) for performing such research. Thus design science research consists of two primary activities: (1) the creation of new knowledge through the design of novel or innovative artefacts; and (2) the analysis of the artefacts' use and/or performance, with reflection and abstraction [5].

The DSR paradigm constitutes a series of rigorous activities involved in designing, evaluating, and communicating the artefacts used to solve organisational problems [4, 10]. Evaluating a DSR artefact is crucial [4, 7, 9-15]. Together with 'build', evaluation is one of the two key activities that constitute DSR [9]. Without sound evaluation, DSR must conclude by only theorising about the utility of design artefacts; in other words, it must claim that a new artefact is functional and relevant without any evidence that it actually is. Evaluation needs to be twofold [15]: (1) evaluating the artefact in terms of the utility it contributes to its environment (the relevance cycle of DSR [4]); and (2) evaluating the design and the artefact in terms of the knowledge they contribute to the knowledge base (the rigour cycle of DSR [4]). This dual purpose of evaluation means that, if DSR is to live up to its label as 'science', the evaluation should be relevant, rigorous, and scientific [15]. Artefacts should be evaluated based on criteria derived from the requirements of the context in which the artefact will be implemented [1]. Evaluation therefore requires researchers to demonstrate rigorously the utility, quality, and efficacy of a design artefact using well-executed evaluation methods [4].

The literature on DSR identifies a variety of different evaluation methods [4, 10, 11, 16, 17], but provides little guidance on how (and why) to select appropriate methods, or to develop a strategy for what to evaluate, when, and how to conduct evaluation activities in DSR [12, 15]. Also, the cyclical nature of many design science processes may demand different evaluations at different stages of the process [15].

This lack of guidance on how to evaluate DSR artefacts could lead to DSR papers not being published in influential publications unless authors can make persuasive arguments that the artefacts were appropriately evaluated [12], since scientific methods for evaluating artefacts are necessary to recognise the design process as design science research [15].

Venable, Pries-Heje and Baskerville [15] asked what a good way would be to guide the design of an appropriate strategy for conducting the various evaluation activities throughout a DSR project. They answered the question by developing the framework for evaluation in design science (FEDS), which can be used to support and guide DSR researchers (especially novice researchers) in designing the evaluation component(s) of their DSR artefact [15]. The FEDS framework guides DSR researchers towards the development of a suitable evaluation strategy to match a specific DSR situation. The framework focuses on two key purposes of evaluation in DSR: (1) the utility of the artefact in the environment, and (2) the quality of the knowledge contribution of the construction of the artefact [15].

The aim of this paper is to use the FEDS framework to design an alternative evaluation strategy for artefacts that are developed in human factors engineering, by suggesting scientific methods to follow for each evaluation episode throughout the DSR cycle. A 'respect for people' (RFP) lean implementation model was used as a case study.

The next section provides background information on DSR and the FEDS framework. Section 3 elaborates on the research method followed for designing the artefact evaluation strategy, after which the proposed strategy is provided in Section 4. The validation of the research design is presented in Section 5, and the research is concluded in Section 6.

 

2 BACKGROUND

2.1 Design science research

The research paradigm for executing and evaluating design science research is presented in Figure 2, combining behavioural science and design science by using the following three inherent research cycles [3]:

Relevance cycle - bridges the contextual environment of the research project to the design science activities.

Rigour cycle - connects the design science activities with the knowledge base of the scientific foundations, experiences, and expertise that inform the research project.

Design cycle - iterates the core activities of building and evaluating the design artefact and processes of the research.

The sections that follow briefly explain each of the cycles.

2.1.1 The relevance cycle

The environment defines the problem space [18] in which the research question lies. The desire to improve this environment using new and innovative artefacts and processes is what drives DSR [18]. This application domain consists of people, organisational systems, and technical systems that interact with each other to achieve the goal.

Good design science often begins by identifying and representing opportunities and problems in an actual application environment [3].

The output of a DSR study should be returned to the relevance cycle for study and evaluation in the application domain (e.g., using field testing). These results will determine whether additional iterations of the relevance cycle will be required [3].

2.1.2 The rigour cycle

The rigour cycle brings past knowledge to the current research project to ensure its innovation. This knowledge takes the form of experiences and expertise that define the state-of-the-art in the application domain and in existing artefacts and processes. The researchers are required to research and reference the knowledge base thoroughly in order to prove that the designs are novel research contributions (as opposed to routine designs based on well-known processes) [3].

2.1.3 The design cycle

Hevner [3] points out that the internal design cycle is the heart of any DSR project. The cycle iterates between the construction of the artefact, its evaluation, and the feedback to refine the design further. As explained above, the relevance cycle provides the requirements, whereas the design and evaluation theories and methods are drawn from the rigour cycle. It is therefore important to understand the dependency of the design cycle on the other two cycles, while also realising its relative independence during the actual execution of the research.

2.2 Framework for evaluation in design science (FEDS)

The FEDS framework (Figure 3) guides DSR research towards the development of a suitable evaluation strategy to match a specific DSR situation. Evaluation would normally progress from the lower left corner - the state of no evaluation conducted - towards the upper right corner, representing a more comprehensive and rigorous (full and realistic) evaluation [15].

Each evaluation episode along the evaluation trajectory is defined in two dimensions: (1) the functional purpose of the evaluation (x-axis), and (2) the paradigm of the evaluation study (y-axis).

2.2.1 Dimension 1: Functional purpose of the evaluation

Formative and summative evaluations can be considered as the ends of a continuum along which any evaluation might be located, as can be seen on the x-axis of the FEDS framework in Figure 3. Towards the formative end, evaluations must provide a basis for successful action. Towards the summative end, evaluations must create a consistent interpretation across shared meanings (such as standards or requirements) [15].

When formative functions are paramount, meanings are validated by their consequences, and when summative functions are paramount, consequences are validated by meanings [19].

2.2.2 Dimension 2: Paradigm of the evaluation study

A DSR evaluation method has a paradigm - in a sense, similar to scientific paradigms such as positivism or interpretivism [15]. However, the prescriptive and functional nature of DSR needs a more practical and less philosophical approach. The FEDS framework uses the distinction between artificial evaluation and naturalistic evaluation for the y-axis of the framework (Figure 3). Artificial evaluation may be empirical or non-empirical, and it is nearly always positivist and reductionist, being used to test hypotheses. Interpretive evaluations may also be used to attempt to understand better why an artefact works, or why it does not work. Artificial evaluation includes laboratory experiments, simulations, criteria-based analysis, theoretical arguments, and mathematical proofs.

On the other hand, a naturalistic evaluation explores the performance of a solution artefact in its real environment, typically in an organisation [15]. By performing evaluations in a real environment (real people, real systems, real settings), naturalistic evaluations embrace all of the complexities of human practice in real organisations. Naturalistic evaluation is always empirical, and tends towards interpretivism, but may be positivist and/or critical. These evaluation methods typically include case studies, field studies, field experiments, surveys, ethnography, phenomenology, hermeneutic methods, and action research. The dominant interpretive paradigm brings the benefits of stronger internal validation to naturalistic DSR evaluation [20].

2.2.3 Applying the FEDS framework

The chronological progression from formative evaluation to more summative evaluation represents the purpose of DSR - to consider rigorously the quality of the knowledge outcomes. Towards the end of the DSR process, the increasing use of more naturalistic evaluations improves the quality of the knowledge outcomes regarding the artefact's effectiveness in real use. As the artefact increases in quality, the risks become low enough for real use by real users [15].

While moving from the lower left corner to the upper right corner of Figure 3, different trajectories can be followed by conducting a number of evaluation episodes (specific evaluation activities of specific evaluands), using specific evaluation methods. This planned trajectory of evaluations, appropriate for the circumstances of a particular DSR project, is considered an evaluation strategy [15].
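The paper contains no code, but the FEDS notions of 'episode' and 'strategy' can be illustrated with a small data model. The Python sketch below is purely illustrative (all class and field names are assumptions, not part of FEDS): an evaluation episode is a point on the formative-summative and artificial-naturalistic axes, and an evaluation strategy is the planned trajectory of such episodes.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class Purpose(Enum):
    """Functional purpose of an evaluation episode (FEDS x-axis)."""
    FORMATIVE = "formative"
    SUMMATIVE = "summative"


class Paradigm(Enum):
    """Paradigm of the evaluation study (FEDS y-axis)."""
    ARTIFICIAL = "artificial"
    NATURALISTIC = "naturalistic"


@dataclass
class EvaluationEpisode:
    """One planned evaluation activity of a specific evaluand."""
    evaluand: str        # what is evaluated (problem statement, artefact, ...)
    method: str          # scientific method used for the episode
    purpose: Purpose
    paradigm: Paradigm


@dataclass
class EvaluationStrategy:
    """A planned trajectory of evaluation episodes for a DSR project."""
    goal: str
    trajectory: List[EvaluationEpisode]

    def ends_summative_naturalistic(self) -> bool:
        # A human risk and effectiveness strategy should end with a summative,
        # naturalistic evaluation of the artefact's effectiveness.
        last = self.trajectory[-1]
        return last.purpose is Purpose.SUMMATIVE and last.paradigm is Paradigm.NATURALISTIC
```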

 

3 RESEARCH METHODOLOGY

In order to develop an artefact evaluation strategy, Venable et al. [15] propose a four-step process:

1. explicate the goals of the evaluation;

2. choose the evaluation strategy or strategies;

3. determine the properties to evaluate; and

4. design the individual evaluation episode(s).

The execution of these four steps for this study is explained in the sections that follow, while the result - the artefact evaluation strategy for human factors engineering - is provided in Section 4.

Step 1: Explicating the goals of the evaluation

There are different competing goals in designing the evaluation component of DSR that are more relevant at different stages of the research [15]. The goal of the evaluation strategy for this study was stated as:

Determine the utility/benefit of the lean implementation artefact, and rigorously establish that the utility/benefit will continue in real situations and over extended periods of time after the instantiation of the artefact.

Step 2: Choosing the evaluation strategy or strategies

On the basis of the goals of the evaluation, one or more strategies may be more appropriate for the evaluation [15]. The major risk for this study was social and user-orientated, since the design needed to solve a problem in an organisation. This risk, together with the stated goal, led the evaluation strategy towards a human risk and effectiveness strategy [15] (refer to Figure 3).

The human risk and effectiveness trajectory/evaluation strategy emphasises formative evaluations (possibly artificial) early in the process, but progresses quickly to more naturalistic formative evaluations. At the end, naturalistic summative evaluations are suggested that focus on the rigorous evaluation of the effectiveness (utility/benefits) of the artefact [15]. Following this trajectory helps to establish that the benefits of the artefact will continue to accrue even when it is placed in operation in real organisational situations and over the long run, despite the complications of the human and social difficulties of adoption and use [15].

Step 3: Determining the properties to evaluate

The next step in the strategy was to choose the general set of features, goals, and requirements of the artefact that were to be evaluated [15]. The following inputs were used:

lean philosophy [21-23];

the literature on the design of implementation frameworks [24-26]; and

ISO standard 9126 for software engineering [27].

Step 4: Designing the individual evaluation episode(s)

Having chosen a strategy and determined the design requirements of the artefact, each of the actual evaluation episodes (the stars in Figure 3) had to be designed [15]. The strategy determined how many evaluation episodes there would be, when each episode would be conducted, and in what way it would be conducted.

In the DSR paradigm, action design research (ADR) was considered to structure the different phases of the research, since this method addresses two challenges: (1) it addresses the problem situation encountered in the organisational setting; and (2) it constructs and evaluates an artefact that addresses the problem typified by the situation [13].

However, since the focus of this research was to design a new, innovative artefact (as opposed to evaluating an existing artefact), elaborated action design research (eADR) was required [28]. In such a case, where an artefact does not exist, an earlier point of entry is required where the researcher identifies the required theory, and verifies with practitioners the need for an innovative artefact [29]. Figure 4 provides the eADR method, showing the iterative process within and between stages, with entry points positioned appropriately along the innovative artefact design continuum [29].

 

4 RESULTS

Research design overview

Referring to the eADR research method in Figure 4, the research continuum was entered at the problem diagnosing stage, after which four iterative concept design stages were designed [28]. Although the eADR method specifies the intervention, evaluation, and learning steps for each stage, this study found that there should be a different emphasis on these steps throughout the problem diagnosing and concept design stages. Each of the iterative rounds should have a different 'sub'-problem to be solved, and a different artefact should be produced at the end of each iteration. Thus the problem and artefact steps were included in each concept design iteration.

Throughout the process, the DSR relevance cycle was continuously followed by obtaining input and feedback from industry at different strategic points of the research process. Reflection was done after each iterative cycle by publishing the research (and the corresponding artefact), and receiving feedback via the peer-review process.

The five iterative cycles, with the corresponding evaluation episodes that were designed and conducted, are shown in Figure 5, and are explained in the sections that follow.
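For illustration only, and assuming the EvaluationEpisode/EvaluationStrategy sketch introduced at the end of Section 2.2, the five evaluation episodes described in the sections below can be restated as one trajectory. The FEDS position recorded for the traceability-matrix episode is an assumption; the text does not state it explicitly.

```python
# Illustrative restatement of the evaluation strategy of this study,
# reusing the Purpose/Paradigm/EvaluationEpisode/EvaluationStrategy sketch above.
strategy = EvaluationStrategy(
    goal=("Determine the utility/benefit of the lean implementation artefact and "
          "establish that it will continue in real situations over extended periods"),
    trajectory=[
        EvaluationEpisode("problem statement", "gap analysis",
                          Purpose.FORMATIVE, Paradigm.ARTIFICIAL),
        EvaluationEpisode("RFP framework", "systematic literature review",
                          Purpose.FORMATIVE, Paradigm.ARTIFICIAL),
        EvaluationEpisode("thematic map (SA vs Japanese RFP)", "applied thematic analysis",
                          Purpose.FORMATIVE, Paradigm.NATURALISTIC),
        # FEDS position of this episode is not stated in the text; assumed here.
        EvaluationEpisode("design requirements", "design requirements traceability matrix",
                          Purpose.FORMATIVE, Paradigm.ARTIFICIAL),
        EvaluationEpisode("RFP lean implementation model", "Delphi technique",
                          Purpose.SUMMATIVE, Paradigm.NATURALISTIC),
    ],
)
assert strategy.ends_summative_naturalistic()
```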

 

Problem diagnosing: Gap analysis

The top half of Figure 5 indicates the problem diagnosing stage of the eADR method that was followed. The purpose of this stage was to investigate the nature of the research problem by identifying the required theory and verifying practitioners' need for a new artefact.

4.1.1 Problem definition

The research problem for this iteration was stated as the low lean implementation success rate in South Africa, due to the intense focus on tools and techniques at the expense of the human element.

4.1.2 Evaluation

In order to evaluate the problem statement, a formative, artificial evaluation episode was conducted by reviewing five lean implementation strategies, and summarising them according to the themes that became evident. A summary of the 14 management principles [22] was used to perform a gap analysis by analysing the implementation strategies in terms of these lean management principles.
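As a minimal sketch of the mechanics of such a gap analysis (the strategy names and coverage sets below are placeholders, not the study's actual findings), each reviewed implementation strategy can be recorded against the 14 management principles [22], and the principles addressed by none of the strategies reported as gaps.

```python
# Gap-analysis sketch with placeholder data; only the mechanics are illustrated.
principles = [f"Principle {i}" for i in range(1, 15)]  # the 14 management principles [22]
strategies = {
    "Strategy A": {"Principle 1", "Principle 2", "Principle 6"},
    "Strategy B": {"Principle 1", "Principle 8"},
    # ... the remaining reviewed implementation strategies
}


def gap_analysis(principles, strategies):
    """Return the principles that no reviewed implementation strategy addresses."""
    covered = set().union(*strategies.values())
    return [p for p in principles if p not in covered]


print(gap_analysis(principles, strategies))
```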

4.1.3 Learning

The results and learning that occurred in this stage were peer-reviewed and published in the South African Journal of Industrial Engineering [30].

Concept design 1: Systematic literature review

The first concept design iteration is visible in the bottom left corner of Figure 5. The aim of this stage was to investigate, report, and interpret the true, original meaning of the RFP principles, as intended by their creators.

4.1.4 Problem definition

The problem addressed by this stage was that the true, original meaning of RFP was not clearly defined in the literature, which could lead to misunderstanding when implementing lean in different cultures.

4.1.5 Evaluation

Again, a formative, artificial evaluation was conducted, using a systematic literature review (SLR) to determine the original meaning of the RFP principles, as intended by the creators. The review was planned by formulating the problem and research questions, followed by a comprehensive, unbiased search [31]. Studies to be included in the review were selected [32] and critically appraised using comprehensive reading and a detailed analysis. The key emerging themes were combined, integrated, and summarised into an RFP framework [31, 32] - an accessible and usable artefact in the real world of practice and policymakers [31]. To interpret the findings of the SLR in a pragmatic manner, the key emerging themes of the SLR (the RFP principles) were used to propose a conceptual RFP lean implementation framework.
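A simplified sketch of the screening and theme-synthesis steps of such a review is given below; the records, inclusion fields, and themes are hypothetical and only illustrate the mechanics, not the actual studies or the RFP themes that were found.

```python
# Hypothetical SLR screening and synthesis sketch.
records = [
    {"title": "Paper A", "peer_reviewed": True, "addresses_rfp": True,
     "themes": {"develop people", "teamwork"}},
    {"title": "Paper B", "peer_reviewed": True, "addresses_rfp": False, "themes": set()},
]


def screen(records):
    """Apply the inclusion criteria to the search results."""
    return [r for r in records if r["peer_reviewed"] and r["addresses_rfp"]]


def synthesise(included):
    """Count how often each key theme emerges across the included studies."""
    themes = {}
    for record in included:
        for theme in record["themes"]:
            themes[theme] = themes.get(theme, 0) + 1
    return themes


print(synthesise(screen(records)))
```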

4.1.6 Artefact design

During this stage of the eADR method, two conceptual artefacts were developed: (1) the RFP framework, and (2) the conceptual RFP lean implementation framework.

4.1.7 Learning

The conceptual artefacts and other learning that took place during this concept design stage were peer-reviewed and published in the International Journal of Lean Six Sigma [33].

Concept design 2: Applied thematic analysis

The second concept design iteration focused on determining the understanding and applicability of the RFP principles, specifically in the South African context.

4.1.8 Problem definition

The problem addressed by this stage was the limited research that had been done on the understanding and applicability of the Japanese RFP principles in the South African context.

4.1.9 Evaluation

Following the human risk and effectiveness trajectory in Figure 3, the evaluation episodes for this study started moving towards the naturalistic side of the research paradigm, although remaining formative in terms of functional purpose. An applied thematic analysis was conducted using an intervention with participants from industry. The study was orientated towards collecting data, using exploratory discussions, that provided contextual information and contributed to understanding the specific phenomena [34]. The study had a constructionist paradigm, as meaning and experience are socially produced and reproduced [35]. A total of 31 individual, exploratory discussions were conducted with a panel of 22 participants.

The sampling technique for this qualitative study involved purposive, expert sampling with a relatively small sample size [36]. The goal was to describe the range of variability and not the distribution across a general population [37]. The inclusion criteria for participation in the study were known, demonstrable experience and expertise in the area of lean implementation [36].

4.1.10 Artefact design

A thematic map of the South African interpretation of the RFP principles was developed and compared with the Japanese interpretation of the RFP principles.

4.1.11 Learning

This method of gathering data and its results were peer-reviewed and published in the South African Journal of Industrial Psychology [38].

Concept design 3: Design requirements traceability matrix

The design requirements developed in Step 3 of the FEDS framework had to be complemented with the information gathered during the previous (second and third) iterations in order to develop the design requirements for the people-centred lean implementation method.

4.1.12 Evaluation

The following were integrated into the design requirements:

the RFP themes identified during the second design iteration (Japanese RFP themes), and

the RFP themes identified during the third design iteration (South African RFP themes).

4.1.13 Artefact design

The above information was combined with the design requirements developed in Step 3 of the FEDS framework in order to develop a design requirements traceability matrix that could be used for the RFP lean implementation model.
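A minimal sketch of such a traceability matrix as data follows; the requirement wording is hypothetical, and only the listed sources correspond to the inputs named in this paper.

```python
# Hypothetical traceability matrix: each design requirement is traced to its sources.
traceability = {
    "Model must make the RFP principles explicit": [
        "Japanese RFP themes (systematic literature review)", "lean philosophy [21-23]"],
    "Model must fit the South African context": [
        "South African RFP themes (applied thematic analysis)"],
    "Model must be usable by practitioners": [
        "implementation framework literature [24-26]", "ISO standard 9126 [27]"],
}


def untraced(matrix):
    """Requirements with no recorded source: candidates for re-examination."""
    return [requirement for requirement, sources in matrix.items() if not sources]


assert untraced(traceability) == []
```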

Concept design 4: Delphi technique

The fourth and final concept design iteration (bottom right corner of Figure 5) was used to develop and evaluate the RFP model for lean implementation.

4.1.14 Problem definition

A people-centred model for lean implementation had not been developed for the South African context, and needed to be addressed by this stage of the eADR method.

4.1.15 Artefact design

The RFP model for lean implementation was designed, based on the design requirements traceability matrix from the previous eADR stage.

4.1.16 Evaluation

The evaluation of the new RFP model was done with the Delphi technique. This technique has its origins in the American business community, but has since been widely accepted throughout the world in other sectors, such as healthcare, defence, education, information technology, and engineering [39]. The method can be applied to problems that do not lend themselves to precise analytical techniques, but could rather benefit from the subjective judgement of individuals, on a collective basis, focusing their human intelligence on the problem statement [39, 40]. The technique was used to structure a group communication process so that the process effectively allowed a group of individuals, as a whole, to deal with the complex problem [40]. Questionnaires were designed to investigate agreement between the participants using a Likert scale [26]. It was an iterative process, using a series of these questionnaires interspersed with feedback [39, 40]. Each subsequent questionnaire was developed on the basis of the results of the previous questionnaire. The process stopped when consensus was reached [39], where consensus was defined as agreement above 75 per cent between the experts in rating a particular item within a specific round [26]. Together with the Likert scale questions, participants were also given the opportunity to provide open-ended qualitative feedback.

The selection of the panel was done by purposively sampling experts. The participants were selected on the basis of their expert ability to answer the research questions, and not so that they could form a representative sample for statistical purposes [26, 39]. A heterogeneous group was formed by selecting (a) lean experts from academia; (b) lean experts from industry; and (c) human resource managers from industry.

An e-mail was sent to all participants, with a link to a video that explained the RFP model, and a link to the questionnaire. The e-mail requested them to watch the nine-and-a-half-minute video and then complete the 10-minute questionnaire. A reminder e-mail was sent if no response had been received after seven working days. After a further seven days, the results were analysed by combining the quantitative and qualitative feedback. An average of above 3.75 was achieved for all questions, thus reaching consensus after round 1 of the Delphi technique.
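A small sketch of the consensus check for a single Delphi round is shown below. The questions and ratings are fabricated, and counting scores of 4 or 5 on the five-point Likert scale as 'agreement' is an assumed operationalisation of the 75 per cent criterion (a mean of 3.75 corresponds to 75 per cent of the maximum score of 5).

```python
# Delphi consensus check for one round (fabricated responses, assumed thresholds).
responses = {
    "Q1: The model addresses the RFP design requirements": [5, 4, 4, 5, 3, 4, 5, 4],
    "Q2: The model is implementable in practice":          [4, 4, 5, 4, 4, 3, 5, 5],
}


def consensus_reached(ratings, agree_share=0.75, mean_threshold=3.75):
    """True if enough experts agree (rating >= 4) and the mean exceeds the threshold."""
    share = sum(1 for r in ratings if r >= 4) / len(ratings)
    mean = sum(ratings) / len(ratings)
    return share > agree_share and mean > mean_threshold


another_round_needed = not all(consensus_reached(r) for r in responses.values())
print("Run another Delphi round:", another_round_needed)
```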

The Delphi technique validated the proposed artefact in terms of the initial design requirements, and confirmed that the artefact addressed the research problem of the low success rate of lean implementation (in other words, that the model was fit for purpose).

4.1.17 Learning

The RFP model, an explanation of the Delphi technique, and the learning that occurred throughout this stage were documented by Coetzee [41].

 

5 VALIDATION OF THE RESEARCH METHODOLOGY

The validity of the research design needed to be confirmed: did the process contain sufficient control to ensure that the research outputs were warranted by the inputs [42]? This was confirmed using the following three methods (discussed in the sections that follow):

1. the design science research guidelines;

2. the action design research principles; and

3. a research validation matrix.

5.1 Design science research guidelines

Design science research (DSR) is inherently a problem-solving process. Therefore, building and applying an artefact requires knowledge and understanding of a design problem and its solutions. In order to understand these requirements for effective DSR, Hevner et al. [4] developed seven guidelines to assist researchers, reviewers, editors, and readers. Table 1 states these guidelines, and how they were addressed during this research study.

5.2 Action design research principles

Action design research (ADR) is a research method for generating prescriptive design knowledge by building and evaluating artefacts. It is founded on certain principles [13]. Table 2 states these principles, and how they were addressed during this study.

5.3 Research validation matrix

A third validation method was used to cross-validate adherence to a rigorous research design. A research validation matrix (Figure 6) was used to confirm that each research challenge was addressed by a research objective by applying one or more research design steps [43, 44].

Figure 6 shows how the research problem was divided into research challenges (the sub-problem in each eADR iteration), and how the research purpose was divided into the research solutions (the artefact designed during each iteration of the eADR process). The vertical columns show how each research challenge matched a research solution. These four vertical columns correspond to the four concept design iterations of the eADR research design that was followed. The top part of the validation matrix shows the information sources that were used to verify the research problem. The arrows indicate which sources contributed to which research challenges. Lean implementation literature, the Toyota way literature, the gap analysis, and the Delphi technique were used as input for this part of the study.

The middle part of the validation matrix indicates how the different literature focus areas contributed to verifying the problem statement and to solving the research challenges. The research methods were only used to address/develop the research solutions, while the literature on lean manufacturing and RFP verified the research problem and addressed the research solutions. The lean terminology and the Toyota way literature supported the first three research challenges and addressed all four research solutions. The literature on the barriers to lean implementation contributed to the development of the design requirements of the RFP model.

The bottom part of the validation matrix shows the research design that was followed. The arrows indicate which steps contributed to which research aims. The systematic literature review was used as formative, artificial evaluation to develop the RFP framework, explaining the true, original meaning of the Japanese RFP principles. The applied thematic analysis, a formative but naturalistic evaluation, was used to develop the thematic map of the South African and Japanese RFP themes. After the RFP model was designed and built, it was evaluated (summative and naturalistic) using the Delphi technique.

The above explanation of the research validation matrix (Figure 6) shows that a rigorous research design was developed and followed, by indicating that each research challenge was addressed by a research solution.
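The cross-check that the validation matrix performs can be expressed as a simple completeness test, sketched below; the challenge and solution labels are abbreviated paraphrases of Figure 6, and the design-step lists are illustrative.

```python
# Completeness check: every research challenge must map to a research solution
# through at least one research design step (labels abbreviated from Figure 6).
matrix = {
    "Original meaning of the RFP principles unclear":
        ("RFP framework", ["systematic literature review"]),
    "Understanding of RFP in the South African context unknown":
        ("Thematic map of SA and Japanese RFP themes", ["applied thematic analysis"]),
    "Design requirements not consolidated":
        ("Design requirements traceability matrix", ["requirements integration"]),
    "No people-centred lean implementation model for South Africa":
        ("RFP lean implementation model", ["model design", "Delphi technique"]),
}

unaddressed = [challenge for challenge, (solution, steps) in matrix.items()
               if not solution or not steps]
assert not unaddressed, f"Unaddressed research challenges: {unaddressed}"
```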

 

6 CONCLUSION AND FUTURE RESEARCH

Applying scientific methods to the evaluation of DSR artefacts is necessary to recognise the design process as design science research [15]. Prior work has reported on the crucial evaluation component of DSR research [4, 7, 9-15]. However, these studies provided limited information and guidance on the desirable, acceptable, or customary practices that should be followed [12].

In this study, the FEDS framework and the eADR method within the DSR paradigm were used to develop an alternative evaluation strategy for an artefact that was developed in human factors engineering - a people-centred lean implementation method, to address the low success rate of lean implementation in South Africa. A different scientific method was used to design an evaluation episode for each of the eADR iterations within the DSR paradigm: (1) a gap analysis, (2) a systematic literature review, (3) an applied thematic analysis, (4) a design requirements traceability matrix, and (5) the Delphi technique. The validity of the research design was proven using DSR guidelines, ADR principles, and a research validation matrix.

This evaluation strategy indicated how the strategic use of different kinds of evaluation assisted in establishing the quality of the knowledge delivered by the design science process. The progression from formative and artificial evaluation towards summative and naturalistic evaluation added the required rigour. The relatively quick convergence of the data during the final summative, naturalistic Delphi evaluation (after the first round) could be attributed to the fact that the relevance cycle was effectively incorporated in the study, especially during the exploratory interviews of the applied thematic analysis.

This study has contributed to the field of (human factors) engineering by providing a pragmatic approach to solving abstract, people-related problems in industry. Designing different scientific evaluation episodes, combining the relevance cycle (to include industry input) with the rigour cycle (to ensure scientific rigour), resulted in an effective artefact that addresses the industry problem.

The following limitations are acknowledged. The sample sizes of the applied thematic analysis and the Delphi technique could be considered a limitation of the study. Future work should include larger sample sizes, and could also apply the DSR paradigm to other fields of engineering. Also, the FEDS evaluation strategy was applied to the design of only one (lean implementation) artefact. Further application and evaluation, specifically on a variety of artefacts, would provide further validation of the design.

 

REFERENCES

[1] Tsutsui, W.M. 1999. Manufacturing ideology: Scientific management in twentieth-century Japan. American Historical Review, 104(4), pp. 1278-1279.

[2] Salvendy, G. 2001. Handbook of industrial engineering: Technology and operations management. New York: John Wiley & Sons.

[3] Hevner, A.R. 2007. A three cycle view of design science research. Scandinavian Journal of Information Systems, 19(2), pp. 87-92.

[4] Hevner, A.R., March, S.T., Park, J. & Ram, S. 2004. Design science in information systems research. MIS Quarterly, 28(1), pp. 75-105.

[5] Kuechler, B. & Vaishnavi, V. 2008. Theory development in design science research: Anatomy of a research project. In Proceedings of the Third International Conference on Design Science Research in Information Systems and Technology, Atlanta, Georgia.

[6] De Vries, M., Gerber, A. & van der Merwe, A. 2013. A framework for the identification of reusable processes. Enterprise Information Systems, 7(4), pp. 424-469.

[7] Walls, J.G., Widmeyer, G.R. & El Sawy, O.A. 2004. Assessing information system design theory in perspective: How useful was our 1992 initial rendition? Journal of Information Technology Theory and Application (JITTA), 6(2), pp. 43-58.

[8] Markus, M.L., Majchrzak, A. & Gasser, L. 2002. A design theory for systems that support emergent knowledge processes. MIS Quarterly, 26(3), pp. 179-212.

[9] Gregor, S. & Jones, D. 2007. The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), pp. 313-335.

[10] Peffers, K., Tuunanen, T., Rothenberger, M.A. & Chatterjee, S. 2007. A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), pp. 45-77.

[11] Nunamaker, J.F. Jr, Chen, M. & Purdin, T.D.M. 1990. Systems development in information systems research. Journal of Management Information Systems, 7(3), pp. 89-106.

[12] Peffers, K., Rothenberger, M. & Kuechler, B. (eds). 2012. Design science research evaluation. In Design science research in information systems: Advances in theory and practice. Berlin, Heidelberg: Springer.

[13] Sein, M.K., Henfridsson, O., Purao, S., Rossi, M. & Lindgren, R. 2011. Action design research. MIS Quarterly, 35(1), pp. 37-56.

[14] Walls, J.G., Widmeyer, G.R. & El Sawy, O.A. 1992. Building an information system design theory for vigilant EIS. Information Systems Research, 3(1), pp. 36-59.

[15] Venable, J., Pries-Heje, J. & Baskerville, R. 2016. FEDS: A framework for evaluation in design science research. European Journal of Information Systems, 25(1), pp. 77-89.

[16] March, S.T. & Smith, G.F. 1995. Design and natural science research on information technology. Decision Support Systems, 15(4), pp. 251-266.

[17] Vaishnavi, V., Kuechler, W. & Petter, S. (eds). 2004/17. Design science research in information systems. Created in 2004 and updated until 2015 by Vaishnavi, V. and Kuechler, W.; last updated by Vaishnavi, V. and Petter, S., December 20, 2017. URL: http://www.desrist.org/design-research-in-information-systems/.

[18] Simon, H.A. 1995. The sciences of the artificial. London, England: MIT Press.

[19] Wiliam, D. & Black, P. 1996. Meanings and consequences: A basis for distinguishing formative and summative functions of assessment. British Educational Research Journal, 22(5), pp. 537-548.

[20] Gummesson, E. 2000. Qualitative methods in management research. Thousand Oaks: Sage Publications.

[21] Ohno, T. 1988. Toyota production system: Beyond large-scale production. Portland, Oregon: Productivity Press.

[22] Liker, J.K. 2004. The Toyota way: 14 management principles. New York: McGraw-Hill Education.

[23] Liker, J.K. & Hoseus, M. 2008. Toyota culture: The heart and soul of the Toyota Way. New York: McGraw-Hill.

[24] Deros, B.M., Yusof, S.M. & Salleh, A.M. 2006. A benchmarking implementation framework for automotive manufacturing SMEs. Benchmarking: An International Journal, 13(4), pp. 396-430.

[25] Anand, G. & Kodali, R. 2010. Analysis of lean manufacturing frameworks. Journal of Advanced Manufacturing Systems, 9(1), pp. 1-30.

[26] Nordin, N., Wahab, D.A., Deros, B.M. & Rahman, M.N.A. 2012. Validation of lean manufacturing implementation framework using Delphi technique. Jurnal Teknologi (Sciences & Engineering), 59(Suppl 2), pp. 1-6.

[27] Abran, A., Khelifi, A. & Suryn, W. 2003. Usability meanings and interpretations in ISO standards. Software Quality Journal, 11(1), pp. 325-338.

[28] Mullarkey, M.T. & Hevner, A.R. 2018. An elaborated action design research process model. European Journal of Information Systems, 28(1), pp. 6-20.

[29] Mullarkey, M.T. & Hevner, A.R. 2015. Entering action design research. In Donnellan, B., Helfert, M., Kenneally, J., VanderMeer, D., Rothenberger, M. & Winter, R. (eds), New horizons in design science: Broadening the research agenda (DESRIST 2015), Lecture Notes in Computer Science, vol. 9073. Cham: Springer.

[30] Coetzee, R., van der Merwe, K. & van Dyk, L. 2016. Lean implementation strategies: How are the Toyota way principles addressed? South African Journal of Industrial Engineering, 27(3), pp. 79-91. https://doi.org/10.7166/27-3-1641.

[31] Tranfield, D., Denyer, D. & Smart, P. 2003. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), pp. 207-222.

[32] Perestelo-Pérez, L. 2013. Standards on how to develop and report systematic reviews in psychology and health. International Journal of Clinical and Health Psychology, 13(1), pp. 49-57.

[33] Coetzee, R., van Dyk, L. & van der Merwe, K.R. 2019. Towards addressing Respect for People during lean implementation. International Journal of Lean Six Sigma, 10(3), pp. 830-854.

[34] Sanders, K., Cogin, J.A. & Bainbridge, H.T.J. 2014. Research methods for human resource management. New York: Routledge.

[35] De Vos, A., Strydom, H., Fouche, C.B. & Delport, C.S.L. 2011. Research at grass roots. Pretoria: Van Schaik Publishers.

[36] Trochim, W.M.K. & Donnelly, J.P. 2008. The research methods knowledge base. Mason: Cengage Learning.

[37] Guest, G., MacQueen, K. & Namey, E.E. 2012. Applied thematic analysis. Thousand Oaks: Sage.

[38] Coetzee, R., Jonker, C., van der Merwe, K. & van Dyk, L. 2019. The South African perspective on the lean manufacturing Respect for People principles. SA Journal of Industrial Psychology, 45(0), a1613.

[39] Skulmoski, G.J., Hartman, F.T. & Krahn, J. 2007. The Delphi method for graduate research. Journal of Information Technology Education, 6, pp. 1-21.

[40] Linstone, H.A. & Turoff, M. 1975. The Delphi method: Techniques and applications. Reading, MA: Addison-Wesley.

[41] Coetzee, R. 2019. Development of the Respect for People model for lean implementation in the South African context. PhD thesis, North-West University.

[42] Van Dyk, L. 2013. The development of a telemedicine maturity model. PhD thesis, Stellenbosch University.

[43] Holm, J.E.W. 2018. Quality research management (QRM). Interview.

[44] Van der Merwe, G.P.R. 2014. A risk-based approach to the acquisition of electronic safety equipment for mines. PhD thesis, North-West University.

 

 

* Corresponding author: Rojanette.Coetzee@nwu.ac.za
