
South African Journal of Industrial Engineering

On-line version ISSN 2224-7890
Print version ISSN 1012-277X

S. Afr. J. Ind. Eng. vol.25 no.3 Pretoria nov. 2014

 

GENERAL ARTICLES

 

Developing a tool for project contingency estimation in a large portfolio of construction projects

 

 

M. van Niekerk I; J. Bekker II,*

I Eskom Holdings Ltd, South Africa. nelmt@eskom.co.za
II Department of Industrial Engineering, Stellenbosch University, South Africa. jb2@sun.ac.za

 

 


ABSTRACT

To enable the management of project-related risk on a portfolio level in an owner organisation, project contingency estimation should be performed consistently and objectively. This article discusses the development of a contingency estimation tool for a large portfolio that contains similar construction projects. The purpose of developing this tool is to decrease the influence of subjectivity on contingency estimation methods throughout the project life cycle, thereby enabling consistent reflection on project risk at the portfolio level. Our research contribution is the delivery of a hybrid tool that incorporates both neural network modelling of systemic risks and expected value analysis of project-specific risks. The neural network is trained using historical project data, supported by data obtained from interviews with project managers. Expected value analysis is achieved in a risk register format employing a binomial distribution to estimate the number of risks expected. By following this approach, the contingency estimation tool can be used without expert knowledge of project risk management. In addition, this approach can provide contingency cost and duration output on a project level, and it contains both systemic and project-specific risks in a single tool.




 

 

1 INTRODUCTION

The effective management of project-related risk on a portfolio level is often restricted by contingency estimation methods that are not consistent and objective throughout the portfolio. This is especially detrimental in a large portfolio of projects, where a knowledgeable portfolio manager would need to maintain a portfolio-level risk analysis of all ongoing projects, so as to be able to monitor the risks and vulnerabilities of the entire portfolio. Construction projects comprising a portfolio are risky by nature, as many variables affect their outcome. It is therefore important that contingency cost and duration are allocated to the budget and duration of each project, in order to provide for the possible impact of risks.

Project risks not only include risks that could materialise due to project execution, but also risk conditions inherent to the project or the environment of the organisation. By this logic, project risks can be broadly classified into two categories: systemic and project-specific risks [1]:

'Systemic' refers to the fact that the risk is a product of the project 'system', culture, politics, business strategy, process system complexity, technology, and so forth.

'Project-specific' refers to the fact that the risk is specific to the project - for example, the possibility of rain on a specific project site during a certain time of year.

The link between systemic risks and the impact on cost and duration is stochastic in nature. This poses a challenge during risk identification, as teams find it difficult to understand and estimate the impact of systemic risks on particular cost items or schedule activities. Different project teams estimate the impact of these risks in different ways, leading to subjective results.

The aim of this paper is to discuss the development of a project contingency estimation tool for a large portfolio of similar construction projects. The requirements of this type of tool in such an environment are taken to be as follows:

it must be applicable at all levels of project definition to address the impact of both systemic and project-specific risks; and

it must be usable without expert knowledge of project risk management, as knowledge of this kind is often not readily available on smaller projects.

An effective contingency estimation tool should address the parameters driving systemic risk (systemic risk drivers) using empirical knowledge to produce stochastic models that link these drivers to bottom-line project cost or duration growth. The tool should also include a deterministic approach to the estimation of contingency requirements linked to project-specific risks [2]. The chosen approach is therefore a hybrid contingency estimation tool that incorporates both the artificial neural network (ANN) modelling of systemic risks and the expected value analysis of project-specific risks. Currently, no commercial contingency estimation tool incorporates a hybrid approach that combines a neural network model for systemic risk analysis with an expected value model for project-specific risk analysis. Furthermore, no academic paper has been published on the practical development of such a tool. This paper aims to address this gap by discussing the method employed to develop such a tool in the study environment.

The remaining sections of the paper are organised around the following topics:

1. Background of the study in terms of project risk, project contingency, and the study environment.

2. Methods used for contingency estimation:

Neural network modelling as achieved through a neural network that is trained using historical project data supported by interviews with project managers.

Expected value analysis as achieved in a risk register format employing a binomial distribution to estimate the number of risks that are expected to materialise.

Practical integration of the two methods.

3. Results:

Results of the interviews with project managers.

ANN model results.

4. Conclusions.

 

2 BACKGROUND

This section provides background on project risk, project contingency, and the study environment.

2.1 Project risk

Everyday life is full of intuitive decisions that are made without consciously attributing either quantitative or qualitative values to the risks involved. However, in some settings decisions need to be more objectively informed. A project is an example of such an environment.

The Project Management Body of Knowledge (PMBOK) [3] defines project risk as "an uncertain event or condition that, if it occurs, has an effect on at least one project objective (e.g., scope, schedule, cost and/or quality)."

Project risks not only include risks that could materialise due to project execution, but also risk conditions that are inherent to the environment of the project or organisation. For example, immature project management practices in an organisation would be a risk that applies to all projects in that organisation.

As mentioned before, project risks can be broadly classified into systemic risks and project-specific risks [4]. Systemic risks are stochastic in nature, and can also be called inherent risks [5]. In comparison with systemic risks, project-specific risks have a more deterministic link to their impact on cost and/or duration.

Hollmann et al. [6] state that, from an owner's perspective, systemic risks are especially important as they normally pertain to definitions, planning, technology, and decisions prevalent early in the project that cannot later be transferred to the contractors executing the project. Another important classification of risks involves the difference between internal and external risks [7]:

internal risks are those found within the project - they are often controllable risks;

external risks are generated outside the project, and often cannot be controlled.

Many other terms related to project risk could be discussed, but these are the ones most pertinent to the background of the contingency estimation study in question.

2.2 Project contingency

Adherence to project budget (cost) and schedule (duration) are two of the most important measures of project success. Estimations related to project budget and duration should therefore account for the presence of risks. The term 'contingency' refers to the quantification of an estimate with regard to project costs or duration, so as to cover some element of risk or uncertainty. Cost contingencies account for a probable increase in cost above target estimates, while duration contingencies account for a probable increase in duration above target estimates (i.e., the project will take longer than estimated).

Specialists have different opinions of the exact definition and application of project contingency. In this article, the term will be used as prescribed by Noor and Tichacek [8], according to whom the contingency should be sufficient to cover the cost or time required for the chosen risk response, whether this involves avoiding, transferring, mitigating, or bearing the realisation of risks.

2.3 Study environment

The contingency estimation tool discussed in this article was developed for a department in the distribution division of the Southern African power utility, Eskom Holdings SOC Ltd - specifically, the Project Execution Department of the Eskom Distribution Western Cape Operating Unit. The Department has a portfolio of more than 700 standard and repeatable network asset construction projects. Repeatable projects are those that are repeated regularly, such as the construction of small substations. Standard projects refer to those that are not repeated in their entirety, but do follow a standard process. These differ from mega-projects, which are large one-off projects, such as the construction of a power station. Mega-projects are not executed by the distribution environment.

Standard and repeatable network asset construction projects can be classified into one of four business categories:

1. Electrification: Projects that are initiated to provide electricity to an area that is not yet electrified.

2. Direct customer: Projects that are initiated due to a customer's application for a new service.

3. Strengthening: Projects that involve the expansion or upgrading of networks.

4. Refurbishment: Projects that involve modifications of an asset so as to extend its useful life (without upgrading the asset).

In addition to the business category, a project is classified in terms of its voltage category and its job category. Voltage category refers to the voltage level at which the work is executed (subtransmission, distribution, or reticulation), whereas the job category indicates whether the project involves a line, a cable, or a substation. Throughout the project management process in such an environment, there is no formal method for determining or allocating project contingencies, as is the case with many other established organisations [9]. In the past, contingency cost was often applied as a single percentage of base cost (most often between 5 and 15 per cent of the project cost), determined according to the previous experience of the project manager in question, on similar projects. No breakdown of this contingency percentage was required. At present, investment committees demand that the contingency cost requested alongside the project budget be fully broken down into an itemised list; but as far as duration is concerned, no estimate is required. No standard project risk analysis template exists, and no analysis of data concerning cost or duration growth has been conducted to serve as a guideline for quantifying contingency. Thus the accuracy of the estimated contingency impact is based solely on expert opinion.

The problem with this modus operandi is the assumption that all project managers are experts in their field, while in fact some are relatively inexperienced. Even if one were to assume that all project managers are indeed experts, the expert opinion method of contingency estimation is hampered by the fact that the wide variation in the skills, knowledge, and motivation of different individuals leads to subjectivity. This is evident from the fact that contingency estimates produced currently by project managers, for different projects under similar circumstances, vary widely.

The next section deals with the methods employed to estimate contingency in this study.

 

3 METHODS

To be able to address both systemic and project-specific risks, a hybrid contingency estimation method in line with the one suggested by Hollmann [10] is proposed. This method incorporates both a neural network and an expected value analysis tool. The logic behind this approach is as follows:

neural network modelling is used on data from past projects to evaluate the impact of systemic risks that are not readily quantifiable by traditional risk analysis;

the expected value method is used to evaluate the impact of project-specific risks suitable for traditional risk analysis; and

the simultaneous use of the two methods leverages their respective strengths.

The expected value analysis is performed in Microsoft Excel, and the neural network is programmed in Visual Basic, to enable automated interaction between the two contingency estimation methods. The next section will provide a brief description of neural networks. This includes subsections outlining neural network model architecture and training.

3.1 Artificial neural network modelling

Chen and Hartman [11] define an artificial neural network (ANN) as follows: "an ANN is an information processing technology that simulates the human brain and nervous system". When presented with input and output data (called the training set), the network has the ability to find the function that describes the relationship between them. When new input data is fed in, output data can be obtained according to the approximation function. A typical ANN comprises a group of processing elements organised into a sequence of layers. Successive layers are connected by means of connection weights. Different layers contain different types of nodes: input, output, and hidden nodes. Input nodes accept data presented to the network, output nodes produce network outputs, and hidden nodes represent the relationships in the data. One or more hidden layers are found between the input and output layers. All nodes 'communicate' through connections with certain assigned weights. When the ANN is presented with a data set, 'training' occurs through continuous adjustment of the connection weights using a training algorithm. This mimics the nature of the human brain, where neurons are organised in layers and connected by synapses [12]. Figure 1 [13] illustrates the components of a simple three-layer, single-output neural network.

[Figure 1: Components of a simple three-layer, single-output neural network]
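To make this structure concrete, the following is a minimal sketch of a forward pass through a three-layer network like the one in Figure 1. The authors' tool was written in Visual Basic; this Python version is illustrative only, and its layer sizes (six inputs, nine hidden nodes, two outputs) are borrowed from the model eventually chosen in Section 6.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass: input layer -> hidden layer -> output layer.
    Each layer multiplies by its connection weights and applies an activation."""
    hidden = np.tanh(W1 @ x + b1)       # hidden nodes represent relationships in the data
    return np.tanh(W2 @ hidden + b2)    # output nodes produce the network outputs

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 6, 9, 2         # illustrative sizes (cf. the chosen model, Section 6)
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input-to-hidden connection weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))  # hidden-to-output connection weights
b2 = np.zeros(n_out)

x = rng.uniform(-1, 1, size=n_in)       # one (scaled) input pattern
print(forward(x, W1, b1, W2, b2))       # two outputs, e.g. cost and duration growth
```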

In project contingency estimation, ANNs can be used to develop models for assessing and quantifying risk by identifying the parameters driving risk on a project and correlating them with the risks encountered. The values assigned to these parameters (qualitative or quantitative) should describe a pattern that can be linked easily to the risks encountered in projects. Project input data patterns and the associated level of risk (cost or duration growth) can then be used to train a neural network. The input-output mapping obtained in this manner is similar to the result of a regression analysis between input and response variables.

Neural network analysis is, however, favoured over regression analysis. Research conducted by Chen and Hartman [11] demonstrates the superior performance of neural networks compared with that of regression analysis in the project management environment. Also, the lack of upfront knowledge on the nature of the relationships between inputs (risk drivers) and outputs (project cost growth or duration growth) opens up the possibility that regression analysis might not be successful, as the method requires the cost or duration function to adhere to a defined mathematical form.

The next three subsections explain how the architecture of the neural network was determined, which method and algorithm(s) were selected for training, and how training data was obtained.

3.2 ANN architecture

Six systemic risk drivers were identified as input variables for the study environment in question:

1. Project definition level.

2. Latest approved project cost.

3. Latest approved project duration.

4. Business category.

5. Voltage category.

6. Job category.

Project definition level refers to the required level of definition at the project stage in question, and is represented by a number between 0 and 1, where 1 denotes complete definition. It is assumed that the level of definition required at each project stage represents the actual level of project definition at the relevant stage gate with reasonable accuracy. The terms 'latest approved project cost' and 'latest approved project duration' refer to the most recently approved cost and duration at the stage in question. Project duration is calculated from the date of concept release approval (at which point the concept becomes a project) to the date of commissioning/energising the asset. Business category, voltage category, and job category refer to the relevant project categories, as discussed earlier (see Section 2).

Burroughs and Juntima [14] propose the following drivers for use in neural network modelling: project definition level, process complexity, contracting strategy, equipment percentage, and use of new technology. With reference to the input variables identified above, the project definition level is represented by the variable of the same name, and all variables, save project definition level, combine to describe process complexity. Contracting strategy is not added as an input variable, as all projects under consideration follow a similar strategy. Equipment percentage is represented by the combination of the last three variables (similar project types are assumed to have similar equipment budgets). As none of the network asset construction projects used in the study were the first of their kind (being standard and repeatable projects), a variable representing use of new technology was not included.

To be able to use neural network modelling for contingency estimation at different phases in a project's lifecycle, estimate computations should be based on dynamic evaluations of input drivers at each project phase [15]. For this reason, data regarding three project lifecycle points was obtained for ANN training (end of pre-project planning stage, end of concept stage, and end of definition stage). Thus three input patterns were available per project, each containing the value of all input variables at the relevant point in the project lifecycle.

Three of the input variables are categorical variables, where 'categorical' refers to a variable denoting a decision between different categories/levels. One level should not be seen as superior to another; they cannot be organised sequentially along a numerical scale, as this would suggest a relative importance that does not exist. As with all function-mapping techniques, ANN models cannot interpret categorical variables that cannot be put in a progressive order as single input variables [16]. Categorical input variables were therefore incorporated through indicator variables, where a categorical variable with h levels can be modelled by h - 1 indicator variables, as illustrated in Table 1, using the voltage category as an example. I_5i denotes the i-th indicator variable representing the fifth neural network input risk driver (voltage category).

[Table 1: Indicator variable representation of a categorical input, using the voltage category (h = 3) as an example]
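A minimal sketch of this h - 1 indicator coding, using the three-level voltage category as in Table 1. The level ordering and the choice of reference level here are assumptions; the paper's exact assignment may differ.

```python
def to_indicators(value, levels):
    """Encode a categorical variable with h levels as h - 1 indicator variables.
    The first level serves as the reference and maps to all zeros."""
    return [1.0 if value == level else 0.0 for level in levels[1:]]

voltage_levels = ["subtransmission", "distribution", "reticulation"]  # h = 3
for v in voltage_levels:
    print(v, to_indicators(v, voltage_levels))
# subtransmission [0.0, 0.0]
# distribution    [1.0, 0.0]
# reticulation    [0.0, 1.0]
```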

With this approach, eight input nodes would be required to represent all six systemic input variables (risk drivers). Two output nodes were used to represent the two independent output variables:

1. required contingency cost percentage due to systemic risks; and

2. required contingency duration percentage due to systemic risks.

With regard to the hidden layer of the neural network, two choices needed to be made:

1. How many hidden layers would be used?

2. How many hidden nodes would there be in each layer?

A multilayer feed-forward neural network with one hidden layer was employed because problems that require more than one hidden layer are rare [17]. The selection of the number of hidden nodes in this layer is very important. Using too few nodes could result in 'underfitting', in which case there would be too few nodes to detect the signals in a complicated data set adequately. On the other hand, using too many nodes could result in 'overfitting', in which case the training set would not be sufficient to train all the nodes in the hidden layer [17, 18], resulting in the network overfitting itself to the available content. This would lead to bad generalisation when the network is confronted with new input data. Even in the case where there are enough training patterns, too many hidden nodes could still increase the training time to an extent that makes it difficult to train the neural network adequately.

Heuristics suggested by Heaton [17] indicate that between two and 16 hidden nodes should be used in this study. Our approach was therefore to select the number of nodes (ranging between these values) based on the accuracy of the model obtained from independent model runs. This is discussed further in the section on the ANN results.

In order for the backpropagation method (see Section 3.3) to be applied, the activation function used in the hidden and output layers must be continuous and differentiable. The sigmoid function is by far the most frequently applied choice, as it exhibits a balance between linear and nonlinear behaviour [19], and it was therefore the activation function type applied to the hidden and output layer nodes in the ANN model of this study. The contingency cost and duration percentages to be computed by the neural network were anticipated to be positive additions to the project budget or duration; however, there were cases in the training set of historical data where projects were finished under budget or earlier than planned. The hyperbolic tangent function is a form of the sigmoid function that returns both positive and negative numbers. To enable the network to approximate these 'negative' (in terms of the ANN) outcomes alongside the positive outcomes, the hyperbolic tangent function was applied as the activation function.
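The range difference is easy to see numerically; a small illustration (the values are not tied to the study's data):

```python
import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
logistic = 1.0 / (1.0 + np.exp(-z))  # logistic sigmoid: outputs in (0, 1), positive only
tanh = np.tanh(z)                    # hyperbolic tangent: outputs in (-1, 1)
print(logistic)  # [0.119 0.378 0.5   0.622 0.881]
print(tanh)      # [-0.964 -0.462  0.     0.462  0.964]
```

Only the hyperbolic tangent can express the under-budget and ahead-of-schedule outcomes present in the historical data.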

3.3 ANN training method and algorithm

There are many ways in which neural networks can be trained. Most of these fall into the categories of supervised, unsupervised, or hybrid training methods [17]. Supervised training occurs by presenting the network with a set of input data and corresponding anticipated output data regarding an environment that is unknown to the neural network of interest. Unsupervised training occurs when only input data is presented to the network. Hybrid methods of supervised and unsupervised training also exist.

The learning task that the neural network in this study is required to perform can be classified as function approximation: the purpose of the neural network design is to approximate an unknown nonlinear input-output mapping function. This type of learning task is a perfect candidate for supervised training [19].

Several learning algorithms were considered for use in this study, including backpropagation, simulated annealing, and genetic algorithms. Backpropagation (BP) is a gradient descent method that is popularly applied to search for the ANN connection weights at which the difference between actual and desired ANN outputs is minimised. To overcome the weaknesses encountered with gradient descent algorithms, global search algorithms can be applied to adjust the weights of a neural network. When applied in this way, Sexton et al. [20] show that simulated annealing (SA) and genetic algorithms both outperform backpropagation as a training algorithm by allowing the solution vector to escape local minima and seek an improved global solution. Note that in such an application, the values of the weights are only updated at the end of each epoch - i.e., when all training patterns have been considered. This is referred to as batch learning, as opposed to per-example learning where weights are adjusted after each training pattern, as in the case of traditional backpropagation [18].

The SA algorithm was therefore applied alongside the backpropagation algorithm in the neural network model used in this study. A method similar to the one proposed by Ludermir et al. [21] was used. Re-annealing (an increase of the temperature parameter in the SA algorithm) was periodically performed when the SA algorithm started to converge, to ensure that the entire solution space was searched and a local minimum was not passed off as a global minimum [22]. The main steps in the overall logical structure of this model, including the parameters that needed to be defined, are outlined in Figure 2.

[Figure 2: Main steps in the logical structure of the ANN training model, including the parameters to be defined]
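As an illustration of this training scheme, the sketch below implements single-objective SA over a tiny network's weights, with batch error evaluation and periodic re-annealing. It is a simplification under stated assumptions: the study's actual algorithm combines backpropagation with the multi-objective SA variant discussed next, and the network, parameter values, and target function here are invented for the demo.

```python
import numpy as np

def tiny_net(w, x, n_in=2, n_hidden=3):
    """Unpack a flat weight vector into a 2-3-1 tanh network and evaluate it."""
    W1 = w[:n_hidden * n_in].reshape(n_hidden, n_in)
    W2 = w[n_hidden * n_in:].reshape(1, n_hidden)
    return np.tanh(W2 @ np.tanh(W1 @ x)).ravel()

def batch_error(w, X, y):
    """Batch learning: a candidate weight vector is scored against ALL
    training patterns (one epoch) before any acceptance decision is made."""
    preds = np.array([tiny_net(w, x) for x in X])
    return float(np.mean((preds - y) ** 2))

def anneal(X, y, n_weights=9, t0=1.0, cooling=0.99, epochs=3000, reanneal_every=1000):
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.5, size=n_weights)
    best_w, best_e = w.copy(), batch_error(w, X, y)
    t = t0
    for epoch in range(1, epochs + 1):
        cand = w + rng.normal(scale=0.1, size=n_weights)   # perturb the weights
        e_old, e_new = batch_error(w, X, y), batch_error(cand, X, y)
        if e_new < e_old or rng.random() < np.exp((e_old - e_new) / t):
            w = cand                  # accept improvements, and some regressions
        if e_new < best_e:
            best_w, best_e = cand.copy(), e_new
        t *= cooling                  # cool the temperature
        if epoch % reanneal_every == 0:
            t = t0                    # periodic re-annealing to escape local minima
    return best_w, best_e

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = np.tanh(X[:, :1] * X[:, 1:])      # smooth synthetic target in (-1, 1)
print(anneal(X, y)[1])                # final batch error
```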

As the training algorithm applied in this study aimed to minimise two objective functions, namely the error function of each of the two output variables, a multi-objective SA algorithm was applied. Suman and Kumar [23] discuss several techniques to modify the single-objective SA algorithm for application in multi-objective problems. For the purpose of this study, multi-objective SA using a Pareto-domination-based acceptance criterion (PDMOSA) was selected.

3.4 ANN training data

To enable training of the neural network model, a set of training patterns was obtained from historical data and semi-structured interviews with project managers concerning projects completed in the 2010/2011 and 2011/2012 financial years. Each training pattern comprises multiple input values that 'feed' the corresponding input nodes of the neural network, and two output values that will be compared with the values generated by the neural network's two output nodes during training.

Data regarding model input variables could be obtained directly from historical project data, whereas the impact of systemic risks on cost and duration growth, as required for the two response variables, was not explicitly available. The main objective of the interviews with the project managers was therefore to gather data for neural network training by uncovering causes for project cost and duration growth of completed projects, in order to distinguish between the impacts of systemic and project-specific risks in these areas.

A list of categorised causes and corresponding primary causes contributing to project cost and duration growth was compiled for use during the interview process. Interviewees did not need to state whether the selected causes were systemic or not; the causes they chose were used to indicate this, as the researchers had studied the list of causes beforehand so as to categorise each as either systemic or project-specific.

After showing the project manager the data relating to the cost and duration of the project concerned, and the list of causes linked to project cost and duration growth, the following questions were posed with regard to each project:

1. Which primary/root cause contributed to project cost growth and schedule delay from the date of concept release approval to the date of definition release approval? If more than one cause is applicable, to what extent did each primary/root cause contribute?

2. Which primary/root cause contributed to project cost growth and schedule delay from the date of definition release approval to the date of execution release approval? If more than one cause is applicable, to what extent did each primary/root cause contribute?

3. Which primary/root cause contributed to project cost growth and schedule delay from the date of execution release approval to the date of finalisation release approval (with all revisions taken into account)? If more than one cause is applicable, to what extent did each primary/root cause contribute?

4. Is there anything we have not touched on that you feel is important regarding the project in question?

In this manner, three training patterns were obtained for each project, each one associated with a different point in the project lifecycle (end of pre-project planning stage, end of concept stage, and end of definition stage). Outputs from 22 interviews provided three data patterns for each of the 89 completed refurbishment, strengthening, and direct customer projects. Electrification projects were excluded from the data set due to the high level of political influence involved in these projects. The environment in which these projects are executed is fairly isolated when compared with other network asset construction projects; in essence, these projects form a system of their own.

After interview data had been gathered and processed to remove project-specific and other impacts, further data preparation had to be performed before model training could begin. These preparatory steps included the scaling of non-categorical variables, the use of indicator variables to represent categorical variables, and the removal of outlier training patterns. After data preparation had been completed, 135 training patterns relating to 85 projects remained for ANN training, validation, and testing.

The next section describes how the expected value analysis of project-specific risks was incorporated into the contingency estimation tool.

 

4 EXPECTED VALUE (EV) ANALYSIS

The expected value (EV) analysis method for contingency estimation directly estimates the cost or duration impact(s) of each significant identified risk [4]. The EV is obtained by multiplying the cost or duration risk impact by the probability of occurrence. If no Monte Carlo or similar simulation is run, the project contingency can be determined at this stage as the sum of the expected cost or duration values (probability x impact) of the individual risks.

In the developed model, a Microsoft Excel risk register acts as a departure point for the EV analysis to estimate project contingency for project-specific risks. The project manager selects the cause category and primary cause for each identified risk, after which anticipated pre- and post-mitigation delay, cost, and likelihood are populated.

Thereafter, the EV contingency amount is determined as the sum of the EVs (probability x impact) for all risks. To ensure that the project-specific contingency value estimated in this fashion is not overly conservative, the method proposed by Khamooshi and Cioffi [9] is applied, employing the binomial distribution to estimate how many of these risks will occur. The calculation steps performed by the tool to determine Contingency_PS (contingency due to project-specific risks) are as follows:

1. Determine binomial distribution parameters:

p (probability of 'success' in each trial) calculated as the average of the probabilities of all identified risks.

n (number of trials) being equal to the number of identified risks.

2. Use the binomial distribution to determine the number of risks expected to occur (a), for which the contingency budget/duration should account.

3. Sort risks according to risk rating (largest to smallest).

4. Calculate the contingency for both cost and duration as the sum of the EVs of the a largest risks:

Contingency_PS = Σ (i = 1 to a) p_i x m_i,

where m_i is the impact that occurs with probability p_i.

Note that in the Excel-based tool, the integer quantity a is determined using an equation provided by Khamooshi and Cioffi [9] for cases where 20 or more risks are being considered at a 99 per cent confidence level (the probabilistic maximum number of risks, a, will be exceeded only 1 per cent of the time). The assumption is therefore made that 20 or more risks will be identified for each project. This assumption holds when considering outputs of risk workshop sessions on previous projects in the study environment. If fewer than 20 risks are identified, the equation will cause a larger number of anticipated risks, a, to be reflected than would have been read directly from the corresponding cumulative probability distribution at a 99 per cent confidence level. This is a desirable effect, as the identification of a smaller number of risks might indicate that the project team in question is inexperienced in risk identification. A more conservative reflection of the contingency amount therefore makes sense.
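The following is a minimal sketch of steps 1 to 4. It assumes the risk rating used for sorting is the expected value p_i x m_i, and it reads a directly from the exact binomial inverse CDF rather than using the approximating equation from [9] described above; the register entries are hypothetical.

```python
from scipy import stats

def contingency_ps(risks, confidence=0.99):
    """risks: (probability, impact) pairs from the project-specific risk register.
    Returns the contingency as the sum of EVs of the 'a' largest risks."""
    n = len(risks)                                   # number of trials
    p = sum(prob for prob, _ in risks) / n           # average probability of occurrence
    a = int(stats.binom.ppf(confidence, n, p))       # number of risks to budget for
    ranked = sorted(risks, key=lambda r: r[0] * r[1], reverse=True)  # by risk rating
    return sum(prob * impact for prob, impact in ranked[:a])

# Hypothetical register entries: (probability, cost impact in rand)
risks = [(0.30, 120_000), (0.10, 400_000), (0.50, 40_000), (0.20, 90_000)]
print(contingency_ps(risks))   # a = 3 of 4 risks funded here
```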

Finally, the total cost distribution is approximated by the assumption of a normal distribution with a standard deviation that is equal to the contingency value, as proposed by Rothwell [24]. The budgeted project cost plus the estimated contingency amount for systemic and project-specific risks represent the mean total project cost, which, in the case of normally-distributed data, is equivalent to the median value (the point at which project cost overrun or underrun is equally likely) [6].
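A short sketch of this approximation, using hypothetical figures:

```python
from scipy import stats

base_cost = 10_000_000     # budgeted project cost (hypothetical)
contingency = 1_500_000    # estimated contingency, systemic + project-specific

# Per Rothwell [24]: mean = budget + contingency, standard deviation = contingency.
total_cost = stats.norm(loc=base_cost + contingency, scale=contingency)
print(total_cost.cdf(base_cost + contingency))  # 0.5: overrun and underrun equally likely
print(total_cost.ppf(0.9))                      # e.g. a P90 funding requirement
```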

 

5 INTEGRATION OF ANN AND EV ANALYSIS

While the neural network is trained, the set of connection weights linked to the lowest training errors (TE) and validation errors (VE) is stored for application in the contingency estimation tool. Upon opening the contingency estimation tool, the following steps are executed:

1. The user enters/selects values for neural network input variables (definition level, cost, duration, voltage category, job category) in a user form.

2. These values are sent as input to the trained neural network, which returns contingency cost and duration percentages representing systemic risk impact.

3. Generated percentages are multiplied by project cost and duration (as per user input) respectively to obtain the monetary value of the cost impact and the duration impact in days.

4. These calculated Contingency_SY (contingency due to systemic risks) cost and duration values are automatically entered as a 'one-liner' (representing systemic risks) with 100 per cent probability in the risk register.

5. Contingency_T (total contingency) is determined, for both cost and duration, with

Contingency_T = Contingency_PS + Contingency_SY.
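A compact sketch of steps 1 to 5, with hypothetical names and figures (the tool itself performs these steps in Excel/Visual Basic):

```python
def total_contingency(ann_cost_pct, ann_dur_pct, base_cost, base_days, register):
    """Combine systemic (ANN) and project-specific (EV register) contingency.
    register: (probability, cost impact, duration impact in days) per risk;
    the binomial cut-off of Section 4 would be applied to it first."""
    sy_cost = ann_cost_pct * base_cost   # step 3: systemic cost impact in money terms
    sy_days = ann_dur_pct * base_days    # step 3: systemic duration impact in days
    ps_cost = sum(p * c for p, c, d in register)
    ps_days = sum(p * d for p, c, d in register)
    return ps_cost + sy_cost, ps_days + sy_days   # step 5: Contingency_T = PS + SY

register = [(0.30, 50_000, 10), (0.20, 120_000, 5)]   # hypothetical risks
print(total_contingency(0.08, 0.12, 2_000_000, 180, register))
```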

The results of the study will be discussed in the next section.

 

6 RESULTS

6.1 Interview results

Before training began, further analysis was conducted on the data gathered during the interview process by applying Benford's Law [25] to ensure data integrity. The leading digit of a nonzero base-ten number is a digit from one to nine, and one might expect that, in a random data set, each of these nine digits would be equally likely to appear as the leading digit (a probability of roughly 11 per cent each). Benford's Law states that this is not the case: in many naturally-occurring data sets, about 30 per cent of the numbers start with a one, about 18 per cent start with a two, and so forth, down to fewer than 5 per cent starting with a nine. The law can be applied to test whether data provided by human input is genuinely unmanipulated, or whether it has been skewed intentionally towards 'seemingly average' values that are perceived to 'sound good'. The leading-digit frequencies of the gathered training data corresponded strongly to the frequencies expected under Benford's Law (about 30 per cent of values starting with a one, about 18 per cent with a two, and so on), as confirmed by a chi-squared goodness-of-fit test, providing confidence that training could proceed.
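A minimal sketch of such a check is shown below; the paper does not give its test details, so the digit-extraction method and the synthetic data are assumptions.

```python
import numpy as np
from scipy import stats

def benford_chi2(values):
    """Chi-squared goodness-of-fit of leading digits against Benford's Law."""
    digits = np.array([int(f"{abs(v):e}"[0]) for v in values if v != 0])
    observed = np.array([(digits == d).sum() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))     # P(leading digit = d)
    return stats.chisquare(observed, benford * len(digits))

rng = np.random.default_rng(0)
data = 10 ** rng.uniform(0, 4, size=500)   # log-uniform data obeys Benford's Law
chi2, p_value = benford_chi2(data)
print(chi2, p_value)   # a large p-value: no evidence of manipulated inputs
```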

Although the main objective of the 22 interviews was to gather neural network training data, the format of the interviews enabled the researchers to distinguish between systemic and project-specific risk impacts on the cost and duration growth of completed projects. Another outcome of the interviews was therefore the frequency of identified project-specific causes relevant to the 89 interview projects, as well as the extent to which events linked to these causes had an impact on each project. To ensure that the knowledge obtained during these interviews is not lost, results regarding project-specific risks concerning previously completed projects will be populated in a 'lessons learned' database. The database will be made available to aid future project-specific risk identification and analysis.

6.2 ANN results

Investigation of initial model runs showed that data relating to the pre-project planning stage of the project lifecycle contained high levels of noise due to large changes in the project cost estimation process during the study period, and flexibility in the project scope in that part of the project lifecycle. This made the application of the model in the pre-project planning stage of the project lifecycle impractical. For this reason, only data from the concept, definition, and execution stages were used as project input.

As discussed, there were eight possible input nodes representing the six chosen input variables (risk drivers). More input variables require more training examples to reach a given accuracy. To ensure that the model was not unnecessarily complex, different scenarios using varying combinations of the six risk drivers for model input were tested, as shown in Table 2.

[Table 2: Input scenarios tested, using varying combinations of the six risk drivers]

For each of the five scenarios, the varying number of input variables presented to the model required a correspondingly varied number of hidden nodes to be considered. Guided by several heuristics proposed by Heaton [17], each scenario was evaluated at three to four 'levels' of hidden nodes. By comparing the results obtained for different numbers of hidden nodes within each input scenario, the best output for each scenario could be obtained, as shown in Table 3. The neural network scenario models were trained using 99 patterns relating to 67 projects, and were applied to a validation data set of 18 data patterns relating to nine projects. The validation set results were used to compare different model scenarios, and the chosen model was tested using the testing data set (18 patterns relating to nine projects). This was modelled on the approach adopted by Pewdum et al. [26].

[Table 3: Best results obtained for each input scenario]

The exclusion of the business category risk driver and its two corresponding indicator input variables (leaving six of the possible eight input nodes, representing five of the possible six input risk drivers), combined with nine hidden nodes, was identified as the best scenario, as shown in Table 3. Its validation error levels were rivalled by only one other scenario (exclusion of job category), and the training error levels were used as a 'tiebreaker' to motivate the choice of scenario.

It should be noted that the error levels were still reasonable in the case of the scenario presented in Table 3, in which all of the categorical inputs were excluded (the network viewed all project types as being similar). This was not the case in a similar study conducted by Lhee et al. [16], where categorical variables were not included as neural network input, but rather used as a means to split data into groups, and a separate network was trained for each type of project. Chen and Hartman [11] also report improved results with regard to contingency estimation using a neural network, when project data is grouped into two or more disjoint sets based on, for example, project cost range, and each set is used with a separate neural network. This difference in results could be attributed to the fact that while Lhee et al. [16] and Chen and Hartman [11] were searching for networks to model all project risks and estimate total contingency, the ANN model in this study aims to approximate only the impact of systemic risks, which are more likely to follow a pattern throughout all types of projects, as they apply to all projects within the system.

To enable the neural network to be objectively evaluated against a competing method, its performance was compared with that of a multiple linear regression (MLR) model using the input variables of the chosen scenario (business category excluded). Table 4 summarises the MLR coefficients for the intercept and the model input variables.

[Table 4: MLR coefficients for the intercept and model input variables]

As with the ANN, the MLR model was trained using the training set and then applied to the validation set. The corresponding results for both the ANN and the MLR model are presented in Table 5.

[Table 5: Validation results for the ANN and the MLR model]

The values predicted by the ANN show an improved 'fit' and lower validation error levels than those of the MLR, as can be seen from the increased R² values and the reduced error levels in Table 5. However, the difference (especially with respect to R²) is not as significant as was anticipated. This could be attributed to the fact that the neural network does not yet have enough training patterns to address all possible scenarios, and its results are expected to improve on those of the MLR further as additional training patterns become available. Also, as Chen and Hartman [11] and Lhee et al. [16] report R² values between 0.23 and 0.55 for similar neural network applications estimating total project contingency, the results are not seen as abnormally low.

The validation errors of the chosen scenario are considered reasonably acceptable, as the average validation error across the two output nodes (contingency cost percentage due to systemic risks and contingency duration percentage due to systemic risks) is 10.1 per cent, and an average batch validation error level of 10 per cent across all output nodes is generally considered good during network validation [17]. Also, the error levels are comparable to those found in similar studies in the project management field: the accuracy is not as high as in other fields of application due to the high variability inherent in data related to the management of projects. For cost and duration, the corresponding variances of the errors for the chosen scenario were determined as 8.296 × 10⁻³ and 2.103 × 10⁻² respectively, and the corresponding testing errors as 6.0 per cent and 11.1 per cent respectively.

The next section will conclude the paper with an overview of possible future research and a discussion of benefits linked to the use of the hybrid method.

 

7 CONCLUSIONS

7.1 Benefits of hybrid method

The tool developed in this study is a clear improvement on the subjective 'expert opinion' approach that is currently applied during contingency estimation in the study environment. The input drivers for systemic contingency estimation do not leave room for subjectivity, as is the case in some other neural network modelling applications for the estimation of project contingency. As neural network model outputs are based on actual data, expert knowledge is brought to the contingency estimation process while simultaneously decreasing subjectivity and man hours required. Also, as the trained neural network estimates contingency cost and duration due to systemic risks separately from the EV analysis of project-specific risks, there is no need for the project team to attempt to analyse systemic risks which, due to their stochastic nature, are not easy to predict or understand.

Another advantage of the proposed model, as opposed to a model where the contingency is calculated as a direct output and the user does not analyse any risks, is that the necessity of user involvement prompts thought and debating processes that will be more likely to provide a full view of possible risks on a project. The phenomenon described as 'when models turn on, brains turn off' is avoided.

The tool is applicable to all levels of project definition, save the pre-project planning stage. This is not seen as an extreme disadvantage, as the risk management process for standard and repeatable projects used within the study environment does not require the estimation of contingency before the concept stage is reached. It is, however, possible to apply the EV portion of the model in isolation during the pre-project planning stage, if a rough indication of required contingency is deemed necessary.

Although the assignment of contingency to each project-specific risk is transparent, the same cannot be said for the assignment of contingency for systemic risks using the neural network. Neural networks are essentially 'black boxes', and their output cannot be explained. However, in the proposed tool the neural network is only applied to address risks that, by their nature, are hard to relate to specific budget or schedule items and are therefore difficult to describe in a risk register format. The disadvantage is therefore not seen as an impediment to the successful application of the tool.

By following the proposed approach, the contingency estimation tool can be used without expert knowledge of project risk management. It also provides contingency cost and duration output on project level, and covers both systemic and project-specific risks in a single tool. The decreased subjectivity facilitated by this approach thereby enables risks related to different projects to be reflected on portfolio level more consistently.

7.2 Future research

Although the ANN developed in this study was shown to outperform MLR, it was not compared with any other regression function types such as polynomial or exponential functions, due to the lack of information on the input-output mapping function, if a defined mathematical form exists at all. If an appropriate regression function were to be found for the data set by conducting a more exhaustive search on possible function formats, this approach would benefit from the transparency of the input-output mapping relationship. This could be a possible topic for future research.

A basic version of the developed EV risk register (excluding the expected number of risks to occur and the total cost distribution) is being rolled out nationally on Eskom's standard and repeatable projects as a risk analysis and contingency estimation tool. The improved hybrid tool that was the final output of this study is currently only applicable to the Distribution Western Cape Operating Unit due to the boundaries of the development data set. It has been proposed that data from other operating units be gathered, using questionnaires to determine whether expanding the neural network training data set in this way would make the developed contingency estimation tool applicable for use on all standard and repeatable projects in Eskom.

 

REFERENCES

[1] Hollmann, J., Caddell, C., Curran, M., Dysert, L., Gruber, C. & Humphreys, K. 2008. Contingency estimating - General principles. Recommended Practice 40R-08, AACE International, Inc.

[2] Hollmann, J. 2007. The Monte-Carlo challenge: A better approach. Morgantown, USA: AACE International, Inc.

[3] Project Management Institute, Inc. 2008. A guide to the project management body of knowledge (PMBOK® Guide). 4th edition, Newtown Square, Pennsylvania, USA: Project Management Institute, Inc.

[4] Hollmann, J., Accioly, R., Adams, R., Boukendour, S., Cretu, O., Portigal, M. & Zhao, J. 2008. Risk analysis and contingency determination using expected value. Recommended Practice 44R-08, AACE International, Inc.

[5] Karlsen, J. & Lereim, J. 2005. Management of project contingency and allowance. Cost Engineering, 47, pp. 24-29.

[6] Hollmann, J., Adams, R., Brandts, H., Chilcott, A., Cretu, O., Pospisil, C., Ramachandran, C., Vrijland, M. & Wells, R. 2009. Risk analysis and contingency determination using parametric estimating. Recommended Practice 42R-08, AACE International, Inc.

[7] Smith, G. & Bohn, C. 1999. Small to medium contractor contingency and assumption of risk. Journal of Construction Engineering and Management, 125, pp. 101-108.

[8] Noor, I. & Tichacek, R. 2009. Contingency misuse and other risk management pitfalls. Cost Engineering, 51, pp. 28-33.

[9] Khamooshi, H. & Cioffi, D. 2009. Program risk contingency budget planning. IEEE Transactions on Engineering Management, 56, pp. 171-179.

[10] Hollmann, J. 2010. Alternate methods for integrated cost and schedule contingency estimating. Atlanta, USA: AACE International, Inc.

[11] Chen, D. & Hartman, T. 2000. A neural network approach to risk assessment and contingency allocation. Morgantown, USA: AACE International, Inc.

[12] Bode, J. 1998. Neural networks for cost estimation. Cost Engineering, 40, pp. 25-30.

[13] Hegazy, T. & Ayed, A. 1998. Neural network model for parametric cost estimation of highway projects. Journal of Construction Engineering and Management, 124, pp. 210-218.

[14] Burroughs, S. & Juntima, G. 2004. Exploring techniques for contingency setting. Morgantown, USA: AACE International, Inc.

[15] Trost, S. & Oberlender, G. 2003. Predicting accuracy of early cost estimates using factor analysis and multivariate regression. Journal of Construction Engineering and Management, 129, pp. 198-204.

[16] Lhee, S., Issa, R. & Flood, I. 2012. Prediction of financial contingency for asphalt resurfacing projects using artificial neural networks. Journal of Construction Engineering and Management, 138, pp. 22-30.

[17] Heaton, J. 2005. Introduction to neural networks with Java. Chesterfield, USA: Heaton Research, Inc.

[18] Chuang, C., Su, S. & Hsiao, S. 2000. The Annealing Robust Backpropagation (ARBP) learning algorithm. IEEE Transactions on Neural Networks, 11, pp. 1067-1077.

[19] Haykin, S. 1999. Neural networks - A comprehensive foundation. 2nd edition, Delhi, India: Pearson Education, Inc.

[20] Sexton, R., Dorsey, R. & Johnson, J. 1999. Optimization of neural networks: A comparative analysis of the genetic algorithm and simulated annealing. European Journal of Operational Research, 114, pp. 589-601.

[21] Ludermir, T., Yamazaki, A. & Zanchettin, C. 2006. An optimization methodology for neural network weights and architectures. IEEE Transactions on Neural Networks, 17, pp. 1452-1459.

[22] Treadgold, N. & Gedeon, T. 1998. Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm. IEEE Transactions on Neural Networks, 9, pp. 662-668.

[23] Suman, B. & Kumar, P. 2006. A survey of simulated annealing as a tool for single and multi-objective optimization. Journal of the Operational Research Society, 57, pp. 1143-1160.

[24] Rothwell, G. 2005. Cost contingency as the standard deviation of the cost estimate. Cost Engineering, 47, pp. 22-25.

[25] Tichenor, C. & Davis, B. 2008. The applicability of Benford's Law to the buying behavior of foreign military sales customers. Global Journal of Business Research, 2, pp. 77-85.

[26] Pewdum, W., Rujirayanyong, T. & Sooksatra, V. 2009. Forecasting final budget and duration of highway construction projects. Engineering, Construction and Architectural Management, 16, pp. 544-557.

 

 

* Corresponding author
