South African Journal of Industrial Engineering

On-line version ISSN 2224-7890
Print version ISSN 1012-277X

S. Afr. J. Ind. Eng. vol.28 n.1 Pretoria May. 2017

http://dx.doi.org/10.7166/28-1-1512 

GENERAL ARTICLES

 

Exploring variability among quality management system auditors when rating the severity of audit findings at a nuclear power plant

 

 

R.C. Simons*, # ; A. Bester; M. Moll

Department of Industrial and Systems Engineering at the Cape Peninsula University of Technology, South Africa

 

 


ABSTRACT

A reliable quality assurance (QA) function in the nuclear environment is underpinned by the effective identification of risk, and by effective decision-making processes in relation to the risk identified. The need for competent auditors who are able to remain objective and independent at all times forms a critical component of this process. This exploratory study sought to determine reasons for the noted inconsistency among auditors when rating the severity of audit findings, and to provide recommendations to reduce this variability. The Delphi technique, a structured process to gather information from a panel of experts, was adopted to enable multiple iterations of qualitative and quantitative data collection and analysis, mimicking the elements of a sequential exploratory strategy within a mixed-methods design.


OPSOMMING

'n Betroubare gehalteversekeringsfunksie in die kernomgewing word ondersteun deur die effektiewe identifikasie van risiko sowel as doeltreffende besluitnemingsprosesse met betrekking tot die geïdentifiseerde risiko. Die behoefte aan bekwame ouditeure wat in staat is om te alle tye objektief en onafhanklik te bly, is 'n kritieke komponent van hierdie proses. Hierdie verkennende studie het gepoog om die redes vir die bekende teenstrydigheid onder ouditeure te bepaal wanneer die erns van ouditbevindinge beoordeel word, en om aanbevelings te verskaf om die bekende variasie te verminder. Die Delphi-tegniek, 'n gestruktureerde proses vir die insameling van inligting uit 'n paneel van kundiges, is aangeneem om verskeie iterasies van kwalitatiewe en kwantitatiewe data-insameling en analise moontlik te maak, in 'n poging om die elemente van 'n opeenvolgende verkennende strategie na te boots.


 

 

1 INTRODUCTION

Within the nuclear industry, where the failure of processes to conform to safety codes and standards can have catastrophic results, it is imperative that an organisation's quality assurance (QA) department provides the assurance of process compliance deemed necessary for the safe operation of a nuclear power plant. Key to providing this assurance is the performance of process audits, which provide the platform for collecting and analysing critical information; formulating significant, reliable, and value-adding audit findings; and reporting this key information at the most appropriate levels within an organisation, as claimed by Beckmerhagen, Berg, Karapetrovic, and Willborn [1].

The significance of the research is noted in the area of controlling subjectivity among auditors when rating the severity of quality management system (QMS) audit findings. If the subjectivity can be controlled, and the levels of variability reduced, then the confidence placed in the QMS audit outcomes can be improved. Based on this increase in confidence in the QMS audit outcomes, senior management would potentially be more willing to assign resources to address significant QMS audit findings.

The research seeks to explore and describe how auditors rate the severity of audit findings; potentially identify reasons for inconsistencies among auditors; and provide recommendations to improve the level of consistency among auditors when they rate QMS audit findings.

 

2 LITERATURE REVIEW

2.1 Audit process performance

Beckmerhagen et al. [1] defined the QMS audit as a system used to achieve pre-determined objectives that might include identifying risks to operational and business processes. This is particularly critical in high-risk organisations where non-compliance is associated with unacceptable risks.

Contextualising the topic of audit effectiveness, a study by Duff [2], dealing with the quality of finance auditing, provides the four-factor audit quality model (Figure 1). The model identifies elements, separated into service quality and technical quality, that impact overall audit quality. These elements relate directly to people and processes; by inference, when an organisation effects positive changes to one or more of them, the effectiveness of the audit process can be enhanced, improving the opportunity to identify risk to the processes. Colbert and Alderman [3] supported this view of Duff [2], and it is this model that forms the basis for further evaluation in subsequent sections. Service quality will initially be the focus of the discussion, followed by the technical quality of an audit.

2.2 Auditee perception

The dissection of auditee perception relating to audit effectiveness by Elliot, Dawson and Edwards [4] returned the view that audits can be seen in a negative light - not necessarily based on audit execution performance, but rather on perceptions held by auditees. This is evident when managers and auditees consider audits as mandatory exercises with few or no positive benefits. Furthermore, when the various role players consider audit findings to be neither value-adding nor reliable, it remains a challenge to convince managers and auditees of the potential value of the audit process. In satisfying the auditee and ensuring customer satisfaction, the following aspects have been identified:

Execute audits seen as value-adding to the auditee.

Identify findings that are considered significant.

Formulate findings that give rise to effective acts of resolution (corrective, preventive, and improvement actions) that minimise recurrences of non-compliance.

Beecroft [5] supported the previous viewpoints, and found that management plays a crucial role in promoting the reputation of the QMS audit. However, this positive endorsement only occurs when managers themselves believe that audits offer benefits to the organisation. Similar sentiments were echoed by Robitaille [6], who highlighted the auditor's responsibility to promote the worth of audit activities by evaluating the findings and the associated risk; prioritising these findings; and effectively communicating these findings to the auditee/management. Continuing from this, Beckmerhagen et al. [1] warned that when too many nonconformities are raised or too many opportunities for improvement are noted, the possibility of success in fixing problems effectively is reduced, which might challenge managers to see the audit in a positive light.

Elliot et al. [4], citing Roth (2000), found that the value of the audit process improves through communication and marketing, and by gaining the auditee's support. Similarly, Rajendran and Devadasan [7] supported the notion of communication and marketing, and emphasised the need to determine the expectations of the audit customer by soliciting the necessary feedback from stakeholders.

2.3 Auditor role and performance

Robitaille [6] maintained that competent auditors are the most crucial element needed to execute effective audits. Supporting this, Elliot et al. [4] emphasised that when audits are performed by auditors who are unaware of the risk impact of a particular process on the organisation; who accept shortcomings in the scoping of audits; and who fail to consider previously raised nonconformities, this all impacts on the effectiveness of the audit's execution. Similarly, Mohamed and Habib [8], and Fadzil, Haron and Jantan [9] argued that audit quality hinges on fully independent auditors who are confident to share all aspects of the audit findings and process with the auditee.

Despite the positive endorsement in the literature, auditors are still perceived as the 'organisational watchdog', according to Romero [10]. However, numerous sources - including Vanasco [11]; Fadzil et al. [9]; Robitaille [6]; Deribe and Regasa [12]; and Rajendran and Devadasan [7], citing Beckmerhagen, Berg, Karapetrovic and Willborn (2003) - continue to argue that internal auditors provide tangible benefits. These include:

Process monitoring and performance improvement;

Risk management; and

An advisory role to management.

Similarly, Keogh [13] perceived the role of the QA practitioner as vital in detecting defects and faults in business processes in order to correct and improve overall performance. Robitaille [6] likewise found that auditors fulfil a crucial role in business management; associated with that role is a fundamental responsibility compelling auditors to provide unbiased information to the organisation's top management so that effective strategic and operational decisions can be made. Consequently, to execute this "significant responsibility" (Robitaille [6]) effectively, auditors are required to possess specific skill sets, competencies, and mind-sets, including the attributes of auditor independence and objectivity.

2.4 Auditor independence

The Institute for Internal Auditors [14] provided the following definition of independence:

'Independence is the freedom from conditions that threaten the ability of the internal audit activity to carry out internal audit responsibilities in an unbiased manner... '

Law [15] linked auditor independence to auditor credibility, which potentially influences the audit outputs; the overall audit activity; the reputation of the auditor; and auditee perception. Auditor independence has been noted as fundamental to the auditing vocation, and is identifiable by the professional and ethical behaviour of an auditor when criticised by the auditees (Mohamed and Habib [8], citing Nichols and Price (1976); Lu (2005)). Mohamed and Habib [8], citing Cameran, Di Vincenzo, and Merlotti et al. (2005), also indicated that integrity, objectivity, and professional judgement all contribute to auditor independence - just as Karapetrovic and Willborn [16, 17] and Mohamed and Habib [8] indicated that the attributes required for auditor independence are strongly linked to the public perception of audit execution and auditor performance.

2.5 Auditor objectivity

Karapetrovic and Willborn [16] described the following link between independence and objectivity:

'Independence refers to both the auditor's organisational position and state of mind. Objectivity is related to the consistency of the auditing methodology, process and outputs and is being free from bias.'

Dissecting the concept of objectivity, Vanasco [11], citing The Institute for Internal Auditors (1964), highlighted an auditor's mental attitude and the concept of psychological bias as key aspects influencing auditor objectivity.

2.5.1 Auditor's cognitive ability and mental attitude

Even though auditors may not be able to change their organisational position or the existing organisational culture, they can achieve objectivity through a number of methodologies, according to Karapetrovic and Willborn [17]. By addressing an auditor's cognitive ability and mental attitude, auditor objectivity can be enhanced.

Cognitive functionality is broadly divided into two systems, each with its own associated attributes. The first system relates to thinking based more on emotions and intuitive reasoning; the second is based more on logic and rational reasoning. When auditors are aware of their predominant thinking style, they can guard against the pitfalls and bias related to that type of thinking. This can ultimately change an auditor's mental attitude and lead to enhanced auditor objectivity, as noted by Caputo [18], citing Stanovic and West (2000).

2.5.2 Auditor bias and its related influence

Linked to an auditor's mental attitude, Caputo [18] evaluated the effect of bias on decision-making processes, and highlighted twenty-one possible types of biases that influence the decisions that individuals make. The study also highlighted that, even though all individuals are affected by bias, individuals can counter the effect of the bias by understanding the type of bias that is present.

Associated with mitigating bias, Leveson [19], evaluating risk assessment in the area of aeronautics and astronautics, unpacked the topic of risk identification and the impact of bias. That study noted the influence of heuristic biases on decision-making processes and as part of risk identification. The types of biases noted in the study included:

Confirmation bias: Evident when individuals pay particular attention to information that will support an existing opinion of a particular individual or group.

Availability bias: Evident when individuals are more likely to raise concerns when previous data is readily available and is recalled by the individual.

Defensive bias: Also called defensive avoidance - a tendency to deny or rationalise certain difficult topics that might result in confrontation, conflict, and possible stressful situations.

In controlling bias and maintaining objectivity, Leveson [19] found that individuals are more likely to negate the effect of psychological influences and to identify significant risk when they are aware of their own biases. The study also promoted the use of a structured approach when identifying and assessing risk in order to minimise the effect of bias when making decisions. This recommendation of using a structured process speaks directly to the definition of 'objectivity' noted in the study by Karapetrovic and Willborn [16].
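The structured approach that Leveson [19] recommends for assessing risk can be illustrated with a minimal sketch of a criteria-based severity score: by rating each finding against fixed, weighted criteria, the auditor's individual judgement (and its biases) carries less of the decision. The criteria names, weights, and thresholds below are hypothetical examples, not values drawn from this study:

```python
# Minimal sketch of a structured severity-rating aid. Scoring an audit
# finding against fixed criteria, instead of ad hoc judgement, is one
# way to limit the influence of individual bias on the rating.
# NOTE: the criteria, weights, and thresholds are hypothetical.

CRITERIA_WEIGHTS = {
    "safety_impact": 3,       # consequence for nuclear/radiation safety
    "recurrence": 2,          # has the nonconformity been raised before?
    "process_criticality": 2, # importance of the affected process
}

def severity_score(ratings: dict) -> int:
    """Weighted sum of per-criterion ratings (each rated 1-3)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

def severity_level(score: int) -> str:
    """Map a weighted score onto a three-level rating scale."""
    if score >= 17:
        return "Major"
    if score >= 11:
        return "Moderate"
    return "Minor"

finding = {"safety_impact": 3, "recurrence": 2, "process_criticality": 1}
score = severity_score(finding)      # 3*3 + 2*2 + 2*1 = 15
print(score, severity_level(score))  # 15 Moderate
```

Because every auditor applies the same weights and thresholds, two auditors who agree on the per-criterion ratings necessarily agree on the severity level, which is precisely the consistency the structured approach is meant to buy.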

2.6 Audit findings

Robitaille [6] found that audit findings and the resolution of identified anomalies are considered the tangible outputs of the QMS audit process. These outputs require evaluation by both the auditee and the auditor to determine whether the audit activity has been successful.

Elliot et al. [4], citing both Beckmerhagen, Berg, Karapetrovic and Willborn (2004) and Walleans (2000), highlighted specific challenges related to formulating audit findings. According to these authors, audit findings need to be valid and significant to justify recording them and to warrant action to address them. Furthermore, the authors highlighted that when audit findings are considered petty and unimportant by the auditees, they negatively impact the effectiveness of the audit outcome.

Beckmerhagen et al. [1] clarified the concepts of valid and significant audit findings respectively as follows:

'The findings are of sufficient importance and are confirmed without reasonable doubt.'

'Recognising and adequately analyzing the findings in connection with the audit objectives with an emphasis on risk management.'

Robitaille [6] added that valid audit findings lead to important risk identification or meaningful improvements, and could bring financial benefit or bring about positive changes as perceived by the auditee. Beckmerhagen et al. [1] highlighted the importance of significant findings, particularly when providing assurance of compliance with operational and safety standards in high-risk industries where resources must be assigned effectively and efficiently.

Besides the need for significant audit findings, the need for reliable audit findings has also been noted. Elliot et al. [4] emphasised that reliability is dependent on auditor performance; perceived auditor competence; auditee perception; and the specific audit findings being raised. Therefore, the ability of the audit finding to add value to the organisation's performance, and the ability of the auditor to identify risk effectively during the audit process, both contribute to the definition of reliability in the context of the audit activity and specifically of the audit finding.

2.7 Management of identified risk

According to the Institute of Internal Auditors [20]:

'Auditors should have a means of measuring or judging the results and impact of matters identified on an audit.'

Similarly, Robitaille [6] shared the importance of reporting effectively on the audit outcome, and highlighted the challenges auditors experience as part of reporting the audit result, especially when specific risks are not readily quantified or suitably conveyed.

2.7.1 Quantifying risk

Overcoming challenges related to measuring or quantifying, Hubbard [21] showed that, in order to measure any concept effectively, some key elements are required. The first is to understand the purpose of a measurement - whether to support decisions, reduce uncertainties, or reap certain benefits. Apart from the purpose, a clear definition of the concept being measured, and identification of the specific indicators that reflect the presence of that concept, are also needed. Only once all these elements have been identified and understood can the amount of energy and effort needed as part of the measuring process be determined.

Gehman, Lefsrud and Lounsbury [22] argued that the concept of risk has become of paramount interest to any organisation that aims to improve its business performance and achieve business sustainability. In this context, the relevance of audit findings in the management of risk and business performance can assist an organisation on various levels. Tummala and Leung [23] found that identifying risk and related uncertainties as part of an audit process is primarily to influence decision-making processes, which could lead to achieving business goals and objectives, and result in improved business performance.

Related to the auditing process and risk evaluation, Robitaille [6] found that, irrespective of the methodology employed to identify the risks, merely identifying a nonconformity as part of the audit process might not be enough to convince management of the risk inherent in a certain process, or of the decisions and actions that are needed to resolve anomalies. Further evaluation by the auditor/audit team is therefore required. In resolving anomalies, Kendrick [24] found that, besides identifying and understanding risk, appropriate responses by the relevant stakeholders are vital. To support appropriate action, auditors are usually required to provide an opinion related to the perception of risk, in order to influence management decisions and actions that can steer organisations in a particular direction.

2.7.2 Evaluation of risk

In support of Kendrick [24], Robitaille [6] recommended that auditors seek assistance from auditing colleagues when analysing nonconformities so that they better understand the effect, significance, and risk profiling of the nonconformity within the QMS audit environment. Auditor bias could possibly be mitigated at the same time.

As part of prioritising actions and providing the advisory input to managing risk, the Institute of Internal Auditors [20] recommends using a rating process in conjunction with a formal criteria framework and associated methodology to evaluate these risks. When developing and implementing a grading system, criteria framework, and methodology, a number of aspects require consideration. These include:

Determining the purpose of the assurance provided.

Identifying the level at which assurance is provided to an organisation.

Obtaining stakeholder concurrence on criteria to be adopted to ensure stakeholder endorsement once audit findings are evaluated and reported.

Determining whether the proposed rating criteria will satisfy the unique business requirements.

Furthermore, to develop sustainable criteria, the specific framework should consider current and future business needs, should encourage consistent application of the criteria, and could ultimately improve the credibility of the auditing organisation. Specific attributes for the adopted criteria are proposed by the Institute of Internal Auditors [20], and were reiterated by the International Atomic Energy Agency [25]. These attributes include:

Relevance to the organisation.

Reliability - being able to provide accurate data.

Neutrality - thus able to eliminate bias and subjectivity.

Understood by all parties/stakeholders, and considered as value-adding by all.

Completeness - considering all viewpoints to provide a holistic evaluation of the audit findings.

 

3 RESEARCH ENVIRONMENT

The QA department performs process audits to provide the assurance that processes at a nuclear power plant are established, maintained, and implemented in a way that ensures the prevention of a nuclear or radiation incident or accident. The outcome of these audits is interrogated by various levels of management within the organisation, and by multiple stakeholders external to the organisation, including the National Nuclear Regulator (NNR).

The QA department, which currently has twelve auditors with varying technical backgrounds, has encountered increased variability in audit activity outcomes. The cause of such variability may be due to shortcomings in the current methodology used to rate the severity of both the audit activities and the audit findings. Since audit findings form the building blocks for rating the overall audit activity, the study focused on factors that might influence consistency (repeatability) among auditors when rating the severity of audit findings.

 

4 RESEARCH DESIGN AND METHODOLOGY

The aim of this study was to explore the practice among auditors when rating the severity of audit findings, to identify reasons for inconsistencies, and to provide recommendations to improve consistency. The Delphi technique, a structured process to gather information from a panel of experts with elements of a sequential exploratory strategy, was deemed a suitable research method. The Delphi technique is distinguishable by its guarantee of anonymity and its use of controlled feedback, and can be adopted when a realistic reflection of a complex situation is required, or when participation and contributions from individuals are required to resolve the problem. The method includes the collection, analysis, and interpretation of qualitative and quantitative data to corroborate findings. Multiple sources, including Turoff and Linstone [26]; Skulmoski, Hartman and Krahn [27]; Hsu and Sandford [28]; Inaki, Landín and Fa [29]; Paliwoda [30]; and Vakani and Sheerani [31], support this choice of methodology.

The empirical phase of this study included the collection, analysis, and interpretation of data using the qualitative data analysis (QDA) framework suggested by Baptiste [32]. This included:

Defining analysis: Recognising which data would be required to achieve the research goals.

Classifying data: The emphasis is placed on tagging data and grouping the tagged data items.

Making data connections: The linking of information creates the necessary context of the study, and provides a holistic view that is important in establishing insights.

Related meanings: Conveying or reporting the significance of the data collected.

Furthermore, a sequential exploratory strategy (a mixed-methods strategy) aims to explore a phenomenon through an initial qualitative data collection and analysis, followed by a quantitative data collection and analysis. The quantitative data is used to support the qualitative data, making the qualitative findings easier to defend and more acceptable to critics, according to Creswell [33, 34]. The strategy was adopted where applicable.

 

5 DATA COLLECTION AND ANALYSIS

The research participants represented 60 per cent of the auditor population in the Quality Assurance department.

5.1 The first round of the Delphi evaluation

The Delphi evaluation investigated the following areas:

Key area 1 - Q1: "Why do QA auditors rate/grade audit findings?"

Key area 2 - Q2: "What elements affect the objectivity of an auditor?"

Key area 3 - Q3: "What elements contribute to auditor/audit team variability?"

5.1.1 Data collection

The primary data source was the responses to the questions noted in section 5.1 (Q1 to Q3). The responses were collected, tagged, quantified (percentage occurrence), and represented in the associated figures.
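The classification and quantification step (tagging each response with a category and expressing each tag as a percentage occurrence across the panel) can be sketched as follows. The tags and responses are invented for illustration and are not the study's data:

```python
from collections import Counter

def percentage_occurrence(tagged_responses, panel_size):
    """Percentage of panellists whose responses carry each tag.

    tagged_responses: list of (participant_id, tag) pairs produced
    during the classification step of the QDA framework.
    """
    # Count each tag at most once per participant, then express the
    # count as a percentage of the whole panel.
    seen = {(p, t) for p, t in tagged_responses}
    counts = Counter(tag for _, tag in seen)
    return {tag: 100 * n / panel_size for tag, n in counts.items()}

# Hypothetical round-1 data: (participant id, category tag)
data = [
    (1, "identify risk"), (2, "identify risk"), (3, "identify risk"),
    (4, "significant findings"), (5, "significant findings"),
    (1, "audit team dynamics"),
]
print(percentage_occurrence(data, panel_size=5))
```

Deduplicating on (participant, tag) before counting ensures that a participant who mentions the same theme twice does not inflate that category's percentage.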

Key area 1: "Why do QA auditors rate/grade audit findings?"

The occurrence of responses associated with key area 1 is depicted in Figure 2. It is noted that 'significant' with respect to audit findings denotes a finding that is able to identify risk, according to Beckmerhagen et al. [1], and denotes a finding perceived as value-adding by the auditee. It was observed that all participants held similar opinions about the purpose of rating audit findings.

Key area 2: "What elements affect the objectivity of an auditor?"

The occurrence of responses associated with key area 2 is depicted in Figure 3. Robitaille [6] found that team dynamics can assist auditors to attain objectivity, and supported the practice of auditors seeking assistance from auditing colleagues when analysing nonconformities. However, the category of 'Audit team dynamics' only scored 20 per cent during the survey.

 

 

Leveson [19] noted that there is a strong correlation between an auditor's mindset and existing biases. Being aware of these particular mindsets and heuristic biases could assist an auditor to counter the negative effects of bias and thus maintain objectivity. Sixty per cent of the participants identified an individual's objectivity/mindset as an input to overall auditor objectivity.

According to Karapetrovic and Willborn [16], 'Auditor independence' is related to auditor objectivity, an auditor's mindset, and the organisational position of the assurance function. This category was, however, one of the lowest-scoring in relation to auditor objectivity. Similarly, the category of 'Audit team dynamics' was one of the lowest-scoring categories.

Key area 3: "What elements contribute to auditor/audit team variability?"

The occurrence of responses associated with key area 3 is depicted in Figure 4. Related to variability among auditors and audit teams, 'Perceived competence and knowledge', 'Auditing methods', and 'Biased decisions' scored relatively high.

 

 

Even though 'Auditor qualification and experience' reflected a high score for key area 2 (Figure 3), a score of only 20 per cent was achieved for key area 3 (Figure 4). 'Unable to identify potential risk' and 'Planning' had similar scores to 'Auditor qualifications'.

5.2 The second round of the Delphi evaluation

The empirical data collected during the first round of the Delphi technique was evaluated, within the context of the relevant literature sources, to determine the specific quantitative data needed to examine the three key areas further. Once identified, the relevant quantitative data was collected using a survey consisting of three related statements for each key area.

The survey used the following statements:

Key area 1:

S1: Rating findings is an indication of risk.

S2: Rating audit findings is for QA use only.

S3: The reason for rating audit findings is not well understood by auditees.

Key area 2:

S4: Audit team dynamics can affect auditor objectivity.

S5: QA organisational position can affect auditor objectivity.

S6: Individual auditor bias can affect auditor objectivity.

Key area 3:

S7: A rating methodology will enhance consistency among auditors.

S8: A four-level rating score will enhance consistency among auditors.

S9: Variability in rating findings is based on the current skills-set of auditors.

5.2.1 Data collection: Analysis and interpretation

The responses noted during the second round of the study (S1-S9) are quantitatively represented in Figures 5 to 13.
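The round-2 aggregation (per-statement agreement levels, and whether consensus was unanimous) can be reproduced with a simple summary function. The Likert responses below are illustrative only, not the study's data:

```python
def consensus(responses):
    """Summarise Likert responses ('agree', 'neutral', 'disagree') for
    one Delphi statement: percentage agreement and whether the panel
    agreed unanimously."""
    agree = sum(1 for r in responses if r == "agree")
    return {
        "agree_pct": 100 * agree / len(responses),
        "unanimous": agree == len(responses),
    }

# Hypothetical panels of seven responses to two statements
s6_responses = ["agree"] * 7                            # unanimous agreement
s7_responses = ["agree"] * 5 + ["neutral", "disagree"]  # majority only
print(consensus(s6_responses))
print(consensus(s7_responses))
```

Distinguishing majority agreement from unanimity matters here, because several of the round-2 results discussed below turn on exactly that difference.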


5.3 Results and researcher's perspective related to the Delphi evaluation

Key area 1 revisited

In response to the question: "Why do QA auditors rate/grade audit findings?", all participants identified the purpose of rating audit findings as including identifying risk or raising significant audit findings (Figure 2). These are both related to risk identification and risk management, according to Beckmerhagen et al. [1].

Reviewing the data for the second round of the Delphi evaluation provided an unexpected result for S1, depicted in Figure 5. The opinion noted in round 1, that the rating of an audit finding was to indicate risk, was not unanimously held among the participants.

The responses noted for S2 indicated that the majority of participants disagreed that the information related to audit finding ratings was for QA use only (Figure 6). By inference, rating information could therefore be used by, and be valuable to, additional stakeholders, including the auditee.

Conversely, the participants also noted that the current rating system was not well understood by auditees (Figure 7), making it difficult to solicit auditee support for the findings if they do not primarily understand the purpose of the rating process.

Key area 2 revisited

In response to the question: "What elements affect the objectivity of an auditor?", consensus among the participants was observed in the following areas: 'Planning'; 'Auditor qualification'; and 'Experience, knowledge and perceived auditor competence' (Figure 3). This result was unexpected, as the literature review provided limited indications that these elements would potentially impact auditor objectivity.

A smaller proportion of participants initially acknowledged 'Auditor independence' and 'Audit team dynamics' as elements that could affect auditor objectivity (Figure 3). This result was unexpected, as the relevant literature supports the use of team moderation to enhance objectivity (Robitaille [6]). The participants, however, acknowledged the value of audit team members in achieving and enhancing objectivity, by relying on the background and perceived competence of fellow team members. S4 evaluated the element of audit team dynamics impacting objectivity. The result depicted in Figure 8 indicates that participants supported the statement; however, the perception was not unanimous.

During the first round of the Delphi evaluation of key area 2, the following elements were not identified by the participants: 'Organisational position' and the use of 'Applied methodologies'. These elements have been noted in the literature (Karapetrovic and Willborn [16]) as impacting on decision-making processes, individual bias, and subsequent objectivity. In round 1, a smaller proportion of participants initially acknowledged 'Auditor independence' - which potentially relates to organisational position - as an element affecting auditor objectivity (Figure 3).

To determine the participants' perception about 'Organisational position' and 'Individual auditor bias' as related to auditor objectivity, S5 and S6 were used to collect the relevant data. The result indicated consensus among participants, with 100 per cent of the participants agreeing that organisational position and individual auditor bias impacted the levels of auditor objectivity (Figures 9 and 10).

Key area 3 revisited

As part of evaluating the question, "What elements contribute to auditor/audit team variability?", the following categories scored significantly lower among all the categories identified during round 1: 'Auditor qualification and experience', 'Unable to identify potential risk', and 'Planning'.

In assessing the 'Planning' category, Beckmerhagen et al. [1] regarded planning as imperative for the execution of effective audits. However, they did not identify planning as critical to auditor variability. For this reason, further investigation in this area was not performed.

The highest scoring categories were 'Perceived auditor competence', 'Auditing methods', and 'Biased decisions' (Figure 4). This result was unexpected: only the first category had been identified in the evaluation of auditor objectivity (Figure 3), while the latter two were not identified by any participants. This led the researcher to infer that participants might not have recognised that auditor objectivity and auditor consistency (and, conversely, variability) are related, as noted by Karapetrovic and Willborn [16].

Furthermore, if these concepts are regarded as interdependent, and since objectivity is associated with the application of a consistent auditing methodology, the value of such a key element cannot be disregarded when trying to reduce variability. Thus questions arose about the consistent application of the existing methodologies in the research environment; and this led in turn to further evaluation of the auditing methods and the associated rating methodologies (S7-S9). The results are shown in Figures 11 to 13.

Figure 11 indicates agreement among the participants that a rating methodology could enhance consistency among auditors (S7). The consensus, however, was not unanimous, raising questions about the appropriateness of the current methodology as seen by the participants.

The existing methodology applied in the research environment is a three-level rating score. Since the aim was to determine opinions about the appropriateness of the current rating criteria, an alternative rating scale was proposed as part of the second statement for this focus area (S8). Neutral responses were observed when a four-level rating score was proposed in place of the three-level score (Figure 12).

Since objectivity is associated with the consistent application of an auditing methodology, and a consistently applied methodology can reduce variability, it can be inferred that the skills required to apply such auditing methodologies could also impact the overall variability among auditors and audit teams.

Based on the contradictory data collected in round 1 (Figure 3 versus Figure 4) related to auditor competency, it was decided to evaluate the impact of an auditor's skills set (qualification, experience, competence, and knowledge) as part of reducing variability (S9). Subsequently, consensus was not reached about whether the auditor skills set could influence variability in rating the severity of audit findings (Figure 13).

The result noted in Figure 13 is inconsistent with the perception noted in Figure 10, where it was unanimously agreed that auditor bias could affect auditor objectivity and could lead to variation among auditors and audit teams. It is also inconsistent with the literature reviewed in this area, which supports the findings shown in Figure 10 (Karapetrovic and Willborn [16]; Mohamed and Habib [8], citing Cameran, Di Vincenzo and Merlotti (2005); Vanasco [11]).

5.4 Respondent feedback related to the Delphi evaluation

The data collected in the first and second rounds of the Delphi evaluation was tabulated and presented to the participants in order to determine their overall perception of the data collected. The key aspects of the feedback received from the participants were as follows:

Focus area 1: Participants questioned whether the rating of audit findings was of value to the auditee (Figure 5), based on the data highlighting the fact that not all auditees understood the purpose of rating audit findings (Figure 7).

Focus area 2: Respondent feedback highlighted the perception that team dynamics could have a negative impact on auditor objectivity (Figure 8), rather than the positive impact noted in the literature. Furthermore, participants concurred that auditor experience promotes a level of auditor objectivity (Figures 3 and 4). This is not aligned with the literature reviewed, which links objectivity to the implementation of consistent methods.

Focus area 3: Even though the participants concurred that a rating methodology could enhance consistency among auditors (reduce variability), opinions varied among them about whether applied auditing methods would enhance auditor objectivity (Figure 11).

5.5 Related meanings in relation to the Delphi evaluation

Key area 1

In reviewing the purpose of rating findings and reducing subjectivity among auditors, a number of significant points were noted.

Since rating an audit finding is a type of measurement, the measurement must be reliable and be of value. As previously mentioned by Hubbard [21], to ensure that a measurement of any sort or for any purpose is effective, certain elements are needed. It was determined that there was a common understanding among the participants about the purpose of rating an audit finding. However, further evaluation indicated some incompatibilities. Even though participants claimed the rating was for both QA and auditee use, the fact that auditees do not understand the purpose of rating audit findings challenges the validity, reliability, and value-add of the rating measurement to the auditee, at least in respect of the expectations noted by Hubbard [21].

Furthermore, if the intent of the measurement is not understood by the auditee, it can be inferred that the expectations related to actions associated with the various ratings might not be defined, understood, or effectively communicated. When expectations are not understood or communicated, dissatisfaction on the part of either or both parties might be experienced when these expectations are not met, resulting in a perception of ineffectiveness.

According to Elliot et al. [4], audit effectiveness seems to be as dependent on auditee perception as it is on audit execution and auditor competence. Thus the ability to influence and improve auditee perception could enhance auditor/auditee relations and add value to the organisation's performance as a whole. Even though all participants acknowledged that the purpose of the rating process was to identify significant findings and operational and business process risk, when the elements/indicators needed as part of the measurement are not well-defined, it becomes a challenge to perform the measurement effectively.

Key areas 2 and 3

In determining the elements that affect auditor objectivity, the following salient points were noted. Robitaille [6] and Beckmerhagen et al. [1] recommend the use of peer-checking as a way to moderate and improve auditor objectivity. As part of the moderation process, it is important to note that nobody is immune to the effect of their own preconceptions; an individual's risk appetite and tolerance for risk can therefore influence auditor objectivity. Since heuristic biases might be unavoidable, the audit team moderation mentioned above can only mitigate biases to a certain extent, as individuals are able to skew decisions related to risk identification and risk analysis. In the current research environment, this could translate directly into how auditors identify and rate audit findings.

Even though 'auditing methods' and 'biased decisions' were noted as influencing variability among auditors, the omission of 'auditing methods' as an element that could impact auditor objectivity, along with the impact of negative audit team dynamics, could collectively impact on the level of objectivity exercised by an auditor and audit teams (Karapetrovic and Willborn [17]).

Based on the data collected, there is an over-reliance on individual capabilities, and not enough emphasis is placed on consistent decision-making processes. This was evident from the consistently high scores noted for the categories 'Auditor qualification and experience' and 'Perceived knowledge and competence'.

Furthermore, when the same participants are unable to agree on the value of implementing applied auditing methods to enhance auditor objectivity, it is probable that this very mindset impacts the application of consistent methods when formulating and evaluating audit findings. To test this view, a survey was completed to determine which elements were consistently considered, by all the participants, as part of the auditing process.

5.6 Input element survey

Since the participants might have had different worldviews, experiences, and biases, it is highly probable that the variation noted in the rating of audit findings stemmed from the inputs applied when formulating, justifying, and rating an audit finding. Each of these steps is key when communicating the significance of an audit finding, whether it is to provide a summary description, a context, or a measure of severity.

Before continuing, an understanding of the following key concepts, noted by Smith, Bester and Moll [35], citing Ciardiello (2002), may be required:

Cause: The reason or reasons an event or finding has occurred.

Effect: An occurrence, problem, or event. Noted as the 'as found'.

Consequence: The actual or potential resultant or follow-on effect experienced, if the identified condition remains untreated.

Participants were surveyed to determine which elements (from the concepts noted above) were considered as inputs when formulating, justifying, and rating an audit finding. The three administered questions were:

"Which elements do you consider when: Q4 Formulating an audit finding description; Q5 Formulating an audit finding justification; Q6 Rating an audit finding?"

5.7 Data collection and analysis

The empirical data collected during this survey was grouped and quantified by applying the key concepts of cause, effect, and consequence. The percentage distribution of responses was captured in Figures 14-16.


Figure 14 indicates that, when participants formulated an audit finding description, 50 per cent of them primarily considered a category that combined 'effect' and 'consequence', while 30 per cent considered only 'effect', and 10 per cent each considered 'cause' and the combination of 'cause and effect'.

Figure 15 reflects the elements considered by participants when formulating an audit finding justification. Of the participants surveyed, 40 per cent considered the combination of 'effect and consequence', a further 40 per cent considered only 'consequence', and the remaining 20 per cent selected the category of 'cause, effect, and consequence'.

Figure 16 indicates that, when participants rated an audit finding, 50 per cent of them considered a combination of 'cause, effect, and consequence', while fewer of them, 40 per cent, considered 'effect and consequence', and only 10 per cent considered only 'consequence'.
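The grouping and percentage quantification applied to the survey responses can be sketched as below. The response list is hypothetical, chosen only to reproduce the Q6 split described in the text (50 per cent 'cause, effect, and consequence'; 40 per cent 'effect and consequence'; 10 per cent 'consequence'); it is not the study's raw data.

```python
from collections import Counter

# Hedged sketch: grouping input-element survey responses into the
# cause/effect/consequence categories and computing the percentage
# distribution reported for Q4-Q6. Responses below are hypothetical.
q6_responses = (
    ["cause, effect, consequence"] * 5
    + ["effect, consequence"] * 4
    + ["consequence"] * 1
)

def pct_distribution(responses):
    """Map each response category to its share of the total, in per cent."""
    counts = Counter(responses)
    total = len(responses)
    return {category: 100.0 * n / total for category, n in counts.items()}

print(pct_distribution(q6_responses))
```

The same function applies unchanged to the Q4 and Q5 response sets, which makes the cross-question comparison in Figures 14 to 16 straightforward to automate.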

5.8 Results, and researcher's perspective on the input element survey

The prominent input elements identified for Q4 (Figure 14) are represented by the shaded area in Figure 17.


The majority of the participants considered a combination of 'effect and consequence' when formulating a finding description. However, since the finding description is a summary of the nonconforming condition, variation might arise due to how participants perceive the finding. This is supported by Smith et al. [35] citing Ciardiello (2002), as follows:

'...the nonconformity problem statement can be located dynamically within the 'cause and effect' chain. This introduces the difficulty that different auditors may position the same nonconformity effect or consequence in a different location on the chain.'

Practically, therefore, when more than one input - whether cause, effect, or consequence - is used to formulate a finding description, variation among participants might be unavoidable.

The prominent input elements identified for Q5 (Figure 15) are represented by the shaded area in Figure 18.

The majority of the participants considered 'effect and consequence' and 'consequence' when formulating a justification description. Since the justification description is considered as information that provides additional context to the audit finding, it stands to reason that 'consequences' would be a logical input to provide this required framework.

According to Robitaille [6], however, a certain level of subjectivity arises among auditors when analysing nonconformities in relation to effect, significance, and risk-profiling of the nonconformity; and so a level of variation might occur among the participants, based on individual biases and risk tolerance levels.

The prominent input elements identified for Q6 (Figure 16) are represented by the shaded area in Figure 19.


According to Smith et al. [35], when rating an audit finding, the potential effects or consequences of the audit finding are to be considered, while also keeping in mind the context in which the audit finding became evident. Reviewing the distribution of data collected for Q6 (Figure 16), a larger proportion of participants considered a combination of 'cause, effect, and consequence', while fewer of them considered 'effect and consequence', and the lowest number considered only 'consequence'.

Identifying the 'cause' element as part of the 'cause, effect, and consequence' category was an unexpected result, and is not supported by the relevant literature reviewed. In particular, Smith et al. [35] maintain that rating is to be based on the significance and consequence of the finding, and does not refer to 'cause' as a consideration when rating. Consequently, when all of the elements are considered by half of the participants - and when acknowledging the potential variation within the 'cause and effect' chain mentioned earlier - it is not surprising that variation among participants occurs when rating audit findings.

Likewise, the Institute of Internal Auditors [20] recommended bearing in mind the materiality (effect) and impact (consequence) of a finding when formulating and evaluating audit findings, rather than considering the cause of the finding.

5.9 Respondent feedback on the input element survey

As with the Delphi evaluation, participant feedback was solicited to determine participant opinion about the empirical data captured for this section of the study. The participants confirmed the validity of the data collected, and provided additional comment.

Although the cause of a nonconformity was not prescribed as part of the rating step, within the research environment participants indicated that they used this element as an input when rating an audit finding, as is evident in the feedback from one respondent:

'Very rarely would the cause be used as the basis, because this requires an analysis to find the cause. Sometimes the cause is clear; then it can be used.'

Although the participants indicated that the 'cause' of the audit finding was rarely considered, reviewing the data (Figure 16), it is reasonable to deduce that the 'cause' is considered by at least 50 per cent of the participants, thus increasing the likelihood of variation among participants when rating audit findings.

5.10 Related meanings in the input element survey

When both formulating and justifying an audit finding, similar input elements were considered by the majority of the participants (Figures 17 and 18). When reviewing Figures 17 and 18, it would be reasonable to deduce that the inputs were considered to the same extent. However, reviewing the specific data (Figures 14 and 15), the specific distribution of the elements considered indicates the extent of variation among participants in this area.

Similarly, when reviewing Figure 19, it is evident that aspects far beyond the effect of the finding (and possibly the immediate consequence too) were considered when rating findings. When reviewing the specific data (Figure 16), the extent of the variation when rating might be greater than initially perceived, since half of the participants considered cause as an input, increasing the potential for variation and inconsistency among auditors when rating a finding.

Collectively, the variation observed across all aspects of formulating an audit finding may therefore result in inconsistencies among auditors that are particularly evident when rating the audit finding. Furthermore, when prescription and methodology about which aspects to include and measure when formulating and rating an audit finding are absent, auditors might be required to apply their professional judgement to a greater extent, leading to subjective decisions.

 

6 RESEARCH FINDINGS

This exploratory study sought to determine reasons for the noted inconsistency among auditors when rating the severity of audit findings, by applying the qualitative data analysis framework suggested by Baptiste [32]. The following research findings were formulated:

There is a limited correlation between the perceived purpose of rating audit findings and the methodology/criteria currently adopted.

There is a disconnect between how participants regarded and achieved auditor objectivity and auditor consistency.

The potential benefit of audit team composition and team dynamics is not fully realised, and is skewed towards the negative influence of audit team dynamics.

The variation in input elements as part of the formulation, rating, and justification process has contributed to the variability among auditors.

 

7 RECOMMENDATIONS

Based on the key findings, the following analysis and recommendations were noted:

Review the intent of the rating process, and specify the expectations of both the auditor and the auditee in this regard. Once the intent of the measurement (rating) is established and understood, determine which indicators/aspects will be measured. Revise the current rating criteria to consider and include all these inputs.

Establish an applied methodology with clear guidelines for rating audit findings, always keeping the purpose in mind. Guidelines should include: actions to mitigate individual auditor bias; actions to benefit from positive audit team moderation; actions to eliminate the over-reliance on auditor competency; identified aspects of risk deemed necessary as part of the rating process; and specified inputs to be used as part of formulating, rating, and justifying audit findings.

Improve auditor communication with both the auditee and senior management about the purpose of rating audit findings; and, if applicable, communicate to auditees the expectations about required actions in relation to the different severity grading categories.

 

8 CONCLUSION

The study has shown that inconsistencies among auditors arise when the purpose of rating audit findings and the applied methodology and criteria are not aligned. These inconsistencies are magnified when there is an over-reliance on individual auditor capabilities rather than on specific auditing methods to mitigate auditor bias. The study therefore recommends that, by consistently applying an established framework of criteria and a prescribed methodology, QA organisations can control and potentially reduce subjectivity among auditors, and so improve the credibility of the QA function within a high-risk industry.

The study claims that, when auditors consider varying elements in formulating and rating an audit finding, and position the audit finding inconsistently within the cause-and-effect chain, the variability among auditors increases, which can directly affect the variability of audit outcomes. The study also reasons that, by understanding the importance of positioning the audit finding effectively within this cause-and-effect chain, auditors are able to highlight the significance of the audit findings and the associated risk impact on the organisation. Highlighting the significance and the risk impact can in turn result in effective acts of resolution that improve confidence in the QA audit outcomes and the credibility of the QA function.

 

REFERENCES

[1] Beckmerhagen, I.A., Berg, H.P., Karapetrovic, S.V. & Willborn, W.O. 2004. Case study on the effectiveness of the quality management system audits, The TQM Magazine, 16(1), pp. 14-25.

[2] Duff, A. 2009. Measuring audit quality in an era of change: An empirical investigation of UK audit market stakeholders in 2002 and 2005, Managerial Auditing Journal, 24(5), pp. 400-422.

[3] Colbert, J.L. & Alderman, W.C. 1995. A risk-driven approach to the internal audit, Managerial Auditing Journal, 10(2), pp. 38-44.

[4] Elliot, M., Dawson, R. & Edwards, J. 2007. An improved process model for internal auditing, Managerial Auditing Journal, 22(6), pp. 552-565.

[5] Beecroft, G.D. 1996. Internal quality audits - Obstacles or opportunities?, Training for Quality, 4(3), pp. 32-34.

[6] Robitaille, D. 2014. 9 Keys to successful audits. 1st edition. Paton Professional.

[7] Rajendran, M. & Devadasan, S.R. 2005. Quality audits: Their status, prowess and future focus, Managerial Auditing Journal, 20(4), pp. 364-382.

[8] Mohamed, D.M. & Habib, M.H. 2013. Auditor independence, audit quality and the mandatory auditor rotation in Egypt, Education, Business and Society: Contemporary Middle Eastern Issues, 6(2), pp. 116-144.

[9] Fadzil, F.H., Haron, H. & Jantan, M. 2005. Internal auditing practices and internal control system, Managerial Auditing Journal, 20(8), pp. 844-866.

[10] Romero, S. 2010. Auditor independence: Third party hiring and paying auditors, EuroMed Journal of Business, 5(3), pp. 298-314.

[11] Vanasco, R.R. 1996. Auditor independence: An international perspective, Managerial Auditing Journal, 11(9), pp. 4-48.

[12] Deribe, W.J. & Regasa, D.G. 2014. Factors determining internal audit quality: Empirical evidence from Ethiopian commercial banks, Research Journal of Finance and Accounting, 5(23), pp. 86-94.

[13] Keogh, W. 1994. The role of the quality assurance professional in determining quality costs, Managerial Auditing Journal, 9(4), pp. 23-32.

[14] The Institute of Internal Auditors. n.d. [Online] Available from: https://na.theiia.org/standards-guidance/topics/Pages/Independence-and-Objectivity.aspx. Accessed 08/04/2015.

[15] Law, P. 2008. An empirical comparison of non-Big 4 and Big 4 auditors' perceptions of auditor independence, Managerial Auditing Journal, 23(9), pp. 917-934.

[16] Karapetrovic, S. & Willborn, W. 2000. Quality assurance and effectiveness of audit systems, International Journal of Quality & Reliability Management, 17(6), pp. 679-703.

[17] Karapetrovic, S. & Willborn, W. 2001. Audit and self-assessment in quality management: Comparison and compatibility, Managerial Auditing Journal, 16(6), pp. 366-377.

[18] Caputo, A. 2013. A literature review of cognitive biases in negotiation processes, International Journal of Conflict Management, 24(4), pp. 374-398.

[19] Leveson, N. n.d. A systems approach to risk management through leading safety indicators. [Online] Available from: http://sunnyday.mit.edu/B60D2502-F2D9-4699-850A-7D3F4C2A83BF/FinalDownload/DownloadId-6309856DED2638A40C36EC2B5EA97635/B60D2502-F2D9-4699-850A-7D3F4C2A83BF/papers/leading-indicators-final.pdf. Accessed 16/10/2014.

[20] The Institute of Internal Auditors. 2009. Formulating and expressing internal audit opinions. [Online] Available from: https://www.theiia.org/chapters/pubdocs/87/OPINIONS_PRACTICE_GUIDE_FINAL.PDF. Accessed 10/09/2015.

[21] Hubbard, D.W. 2010. How to measure anything. 2nd edition. New Jersey: John Wiley & Sons, Inc.

[22] Gehman, J., Lefsrud, L. & Lounsbury, M. 2014. Perspectives on risk: From techno-economic calculations to socio-cultural meanings. [Online] Available from: http://www.cspg.org/cspg/documents/Conference%20Website/Oil%20Sands/Session_F/F_Oral_4_Gehman_et_al.pdf. Accessed 25/08/2014.

[23] Tummala, V.M.R. & Leung, Y.H. 1996. A risk management model to assess safety and reliability risks, International Journal of Quality & Reliability Management, 13(8), pp. 53-62.

[24] Kendrick, T. 2004. Strategic risk: Am I doing ok?, Corporate Governance, 4(4), pp. 69-77.

[25] International Atomic Energy Agency. 2014. Use of a graded approach in the application of the management system requirements for facilities and activities. [Online] Available from: http://www-pub.iaea.org/MTCD/Publications/PDF/TE-1740_web.pdf. Accessed 17/08/2015.

[26] Linstone, H.A. & Turoff, M. (eds). 2002. The Delphi method: Techniques and applications. [Online] Available from: http://is.njit.edu/DE552453-5DD8-4EA1-8E9D-8946E79D30FA/FinalDownload/DownloadId-C93332C8C26348A4339A5289D4EE308D/DE552453-5DD8-4EA1-8E9D-8946E79D30FA/pubs/delphibook/delphibook.pdf. Accessed 19/08/2015.

[27] Skulmoski, G.J., Hartman, F.T. & Krahn, J. 2007. The Delphi method for graduate research, Journal of Information Technology Education, 6, pp. 1-21.

[28] Hsu, C. & Sandford, B.A. 2007. The Delphi technique: Making sense of consensus, Practical Assessment, Research & Evaluation, 12(10), pp. 1-8.

[29] Inaki, H.S., Landín, G.A. & Fa, M.C. 2006. A Delphi study on motivation for ISO 9000 and EFQM, International Journal of Quality & Reliability Management, 23(7), pp. 807-827.

[30] Paliwoda, S.J. 1983. Predicting the future using Delphi, Management Decision, 21(1), pp. 31-38.

[31] Vakani, F. & Sheerani, M. 2012. How to gain consensus from a group of non-experts: An educationist perspective on using the Delphi technique, Development and Learning in Organizations: An International Journal, 26(4), pp. 20-22.

[32] Baptiste, I. 2001. Qualitative data analysis: Common phases, strategic differences, Qualitative Social Research, 2(3), September. [Online] Available from: http://www.qualitative-research.net/index.php/fqs/article/view/917/2003. Accessed 23/06/2015.

[33] Creswell, J.W. 2003. Research design: Qualitative, quantitative, and mixed method approaches. 2nd edition. California: Sage Publications.

[34] Creswell, J.W. 2009. Research design: Qualitative, quantitative, and mixed method approaches. 3rd edition. California: Sage Publications.

[35] Smith, R., Bester, A. & Moll, M. 2014. Quantifying quality management system performance in order to improve business performance, South African Journal of Industrial Engineering, 25(2), pp. 75-95.

 

 

Submitted by authors 22 Feb 2016
Accepted for publication 5 Apr 2017
Available online 26 May 2017

 

 

* Corresponding author rowena.simons@eskom.co.za
# The author was enrolled for an MTech Quality in the Department of Industrial and Systems Engineering at the Cape Peninsula University of Technology

Creative Commons License All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License