
South African Journal of Economic and Management Sciences

On-line version ISSN 2222-3436
Print version ISSN 1015-8812

S. Afr. j. econ. manag. sci. vol.20 n.1 Pretoria  2017

http://dx.doi.org/10.4102/sajems.v20i1.1490 

ORIGINAL RESEARCH

 

A proposed best practice model validation framework for banks

 

 

Pieter J. (Riaan) de Jongh (I); Janette Larney (I); Eben Maré (II); Gary W. van Vuuren (I); Tanja Verster (I)

(I) Centre for Business Mathematics and Informatics, North-West University, South Africa
(II) Department of Mathematics and Applied Mathematics, University of Pretoria, South Africa


 

 


ABSTRACT

BACKGROUND: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008-2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance.
SETTING: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists.
AIM: To assess the available literature for best practices in model validation.
METHODS: A comprehensive literature study provided background to the complexities of effective model management and focussed on model validation as a component of model risk management.
RESULTS: We propose a coherent 'best practice' framework for model validation. Scorecard tools are also presented to evaluate whether the proposed best practice model validation framework has been adequately assembled and implemented.
CONCLUSION: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.


 

 

Introduction

Model validation is concerned with mitigating model risk and, as such, is a component of model risk management. Since the objective of this article is to provide a framework for model validation, it is important to distinguish between model risk management and model validation. Below, we define and discuss both these concepts.

Model risk management comprises robust, sensible model development, sound implementation, appropriate use, consistent model validation at an appropriate level of detail and dedicated governance. Each of these broad components is accompanied and characterised by unique risks which, if carefully managed, can significantly reduce model risk. Model risk management is also the process of mitigating the risks of inadequate design, insufficient controls and incorrect model usage. According to McGuire (2007), model risk is 'defined from a SOX (USA's Sarbanes-Oxley Act) Section 404 perspective as the exposure arising from management and the board of directors reporting incorrect information derived from inaccurate model outputs'. The South African Reserve Bank (SARB 2015b) uses the definition of model risk as envisaged in paragraph 718(cix) of the revisions to the Basel II market risk framework:

two forms of model risk: the model risk associated with using a possibly incorrect valuation methodology; and the risk associated with using unobservable (and possibly incorrect) calibration parameters in the valuation model. (n.p.)

In its Solvency Assessment and Management (SAM) Glossary, the Financial Services Board (FSB) defines model risk as 'The risk that a model is not giving correct output due to a misspecification or a misuse of the model' (FSB 2010). In a broader business and regulatory context, model risk includes the exposure from making poor decisions based on inaccurate model analyses or forecasts and, in either context, can arise from any financial model in active use (McGuire 2007). North American Chief Risk Officers (NACRO) Council (2012) identified model risk as 'the risk that a model is not providing accurate output, being used inappropriately or that the implementation of an appropriate model is flawed' and proposed eight key validation principles. The relevance of model risk in South Africa is highlighted by the Bank Supervision Department of the SARB in its 2015 Annual Report, where it is specifically noted that some local banks need to improve model risk management practices (SARB 2015a).

Model validation is a component of model risk management and requires confirmation from independent experts of the conceptual design of the model, as well as the resultant system, input data and associated business process validation. These involve a judgement of the proper design and integration of the underlying technology supporting the model, an appraisal of the accuracy and completeness of the data used by the model and verification that all components of the model produce relevant output (e.g. Maré 2005). Model validation is the set of processes and activities intended to verify that models are performing as expected, in line with their design objectives and business uses (OCC 2011b). The Basel Committee for Banking Supervision's (BCBS) minimum requirements (BCBS 2006) for the internal ratings-based approach require that institutions have a regular cycle of model validation 'that includes monitoring of model performance and stability; review of model relationships; and testing of model outputs against outcomes'.
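To make the idea of 'testing model outputs against outcomes' concrete, the sketch below backtests a single rating grade's predicted probability of default (PD) against observed defaults using a simple binomial test. It is purely illustrative: the function name, the obligor and default counts and the 5% significance level are our own assumptions and are not prescribed by BCBS (2006).

```python
# Illustrative sketch only: backtesting one rating grade's predicted PD against
# observed defaults, in the spirit of testing model outputs against outcomes.
from scipy.stats import binom

def binomial_backtest(predicted_pd, n_obligors, n_defaults, alpha=0.05):
    """Flag the grade if the observed default count is improbably high under the model PD."""
    # P(X >= n_defaults) when defaults ~ Binomial(n_obligors, predicted_pd)
    p_value = binom.sf(n_defaults - 1, n_obligors, predicted_pd)
    return {"p_value": p_value, "pd_rejected": p_value < alpha}

# Hypothetical grade: the model predicts a 2% PD; 35 of 1 000 obligors defaulted
print(binomial_backtest(predicted_pd=0.02, n_obligors=1000, n_defaults=35))
# A p-value well below 0.05 suggests the predicted PD understates observed risk
```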

In this article, we assess the available literature for validation practices and propose a coherent 'best practice' procedure for model validation. Validation should not be thought of as a purely mathematical exercise performed by quantitative specialists. It encompasses any activity that assesses how effectively a model is operating. Validation procedures focus not only on confirming the appropriateness of model theory and accuracy of program code, but also test the integrity of model inputs, outputs and reporting (FDIC 2005).

The remainder of this article is structured as follows: The next section provides a brief literature overview of model risk from a validation perspective. The 'Overview of the proposed model validation framework' section outlines the proposed framework for model validation and, in the 'model validation framework discussion' section, this framework is discussed in more detail. Guidelines for the development of scorecard tools for incorporation in the proposed best practice model validation framework are presented in the 'model validation scorecards' section. Some concluding remarks are made in the 'conclusions and recommendations' section. Examples illustrating the importance of proper model validation are given in Appendix 1 and scorecards for the evaluation of the main components of the validation framework are provided in Appendix 2.

 

Brief overview of model risk from a validation perspective

Banks and financial institutions place significant reliance on quantitative analysis and mathematical models to assist with financial decision-making (OCC 2011a). Quantitative models are employed for a variety of purposes including exposure calculations, instrument and position valuation, risk measurement and management, determining regulatory capital adequacy, the installation of compliance measures, the application of stress and scenario testing, credit management (calculating probability and severity of credit default events) and macroeconomic forecasting (Panko & Ordway 2005).

Markets in which banks operate have altered and expanded in recent years through copious innovation, financial product proliferation and a rapidly changing¹ regulatory environment (Deloitte 2010). In turn, banks and other financial institutions have adapted by producing data-driven, quantitative decision-making models to risk-manage complex products with increasingly ambitious scope, such as enterprise-wide risk measurement (OCC 2011b).

Bank models are similar to engineering or physics models in the sense that they are quantitative approaches which apply statistical and mathematical techniques and assumptions to convert input information - which frequently contains distributional information - into outputs. By design, models are simplified representations of the actual associations between observed characteristics. This intentional simplification is necessary because the real world is complex, but it also helps focus attention on specific, significant relational aspects to be interrogated by the model (Elices 2012). The precision, accuracy, discriminatory power and repeatability of the model's output determine the quality of the model, although different metrics of quality may be relevant under different circumstances. Forecasting future values, for example, requires precision and accuracy, whereas rank ordering of risk may require greater discriminatory power (Morini 2011). Understanding the capabilities and limitations of models is of considerable importance and is often directly related to the simplifications and assumptions used in the model's design (RMA 2009).
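The distinction between these quality metrics can be illustrated with a short sketch. The example below (illustrative only; all figures are invented) contrasts forecast accuracy, measured by the root mean square error, with discriminatory power, measured by the area under the ROC curve and the associated Gini coefficient.

```python
# Illustrative sketch only: two different views of model quality.
# RMSE captures precision/accuracy of point forecasts; AUC/Gini capture
# discriminatory power, i.e. how well a score rank-orders risk.
import numpy as np
from sklearn.metrics import roc_auc_score

actual   = np.array([0.021, 0.034, 0.015, 0.042])            # realised values (made up)
forecast = np.array([0.025, 0.030, 0.018, 0.040])            # model forecasts (made up)
rmse = np.sqrt(np.mean((actual - forecast) ** 2))

defaulted = np.array([0, 0, 1, 0, 1, 1, 0, 1])               # observed outcomes (made up)
score     = np.array([0.1, 0.2, 0.8, 0.65, 0.7, 0.9, 0.4, 0.6])  # model risk scores
auc  = roc_auc_score(defaulted, score)
gini = 2 * auc - 1

print(f"RMSE = {rmse:.4f}, AUC = {auc:.2f}, Gini = {gini:.2f}")
```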

Input data may be economic, financial or statistical depending on the problem to be solved and the nature of the model employed. Inputs may also be partially or entirely qualitative or based on expert judgement [e.g. the model by Black and Litterman (1992) and scenario assessment in operational risk by de Jongh et al. (2015)], but in all cases, model output is quantitative and subject to interpretation (OCC 2011b). Decisions based upon incorrect or misleading model outputs may result in potentially adverse consequences through financial losses, inferior business decisions and, ultimately, reputational damage. These consequences stem from model risk, which arises for two principal reasons (both of which may generate invalid outputs):

  • fundamental modelling errors (such as incorrect or inaccurate underlying input assumptions and/or flawed model assembly and construction) and

  • inappropriate model application (even sound models which generate accurate outputs may exhibit high model risk if they are misapplied, e.g. if they are used outside the environment for which they were designed).

Model risk managers, therefore, need to take account of the model paradigm, the correctness of the implementation of any algorithms and methodologies used to solve the problem, the inputs used and the results generated. NACRO Council (2012) asserted that model governance should be appropriate and the model's design and build should be consistent with the model's proposed purpose. The model validation process should have an 'owner', that is, someone uniquely responsible, and should operate autonomously (i.e. avoid conflicts of interest). The validation effort should also be commensurate with the model's complexity and materiality. Input, calculation and output model components should be validated, the limitations of each should be addressed and the findings comprehensively documented (Rajalingham 2005). As far as the model paradigm is concerned, the model needs to be evaluated in terms of its applicability to the problem being solved, and the associated set of assumptions of the model needs to be verified in terms of its validity in the particular context. Example 1 in Appendix 1 gives an illustration of the inappropriateness of the assumptions of the well-known Black-Scholes option pricing model in a South African context. Clearly, all listed assumptions may be questioned, which casts doubt on the blind application of the model in a pricing context.

Some models have been implemented using spreadsheets (Whitelaw-Jones 2015). Spreadsheet use in institutions ranges from simple summation and discounting to complex pricing models and stochastic simulations. Madahar, Cleary and Ball (2008) questioned whether every spreadsheet should be treated as a model, requiring the same rigorous testing and validating. Spreadsheet macros require coding and may be used to perform highly complex calculations, but they may also be used to simply copy outputs from one location to another (Galletta et al. 1993; PWC 2004). Requiring that all macro-embedded spreadsheets be subject to the same validation standards can be onerous (Pace 2008). Example 2 in Appendix 1 highlights some examples of formulas in Excel that provide incorrect answers. In addition, the European Spreadsheet Interest Group (EUSIG) maintains a database of such errors. EUSIG (2016) and Gandel (2013) provide examples of high-impact Excel errors that occurred as a result of inadequate model validation. Validation of the code is therefore of extreme importance, and the code should be tested using not only ordinary but also stressed inputs.
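As a companion to Example 1 and to the recommendation that code be tested with both ordinary and stressed inputs, the sketch below prices a European call under the standard Black-Scholes assumptions (no dividends) and runs two elementary checks: an arbitrage bound under an ordinary input set, and a near-zero-volatility stress under which the price should collapse to its intrinsic value. The parameter values and tolerances are our own illustrative choices.

```python
# Illustrative sketch only: validating a Black-Scholes call pricer with
# ordinary and stressed inputs (European option, no dividends assumed).
import math
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

# Ordinary input: the price must lie between the no-arbitrage bounds
c = bs_call(S=100, K=100, r=0.07, sigma=0.25, T=1.0)
assert max(100 - 100 * math.exp(-0.07 * 1.0), 0.0) <= c <= 100

# Stressed input: with near-zero volatility, a deep in-the-money call
# should collapse to its discounted-strike intrinsic value
c_stress = bs_call(S=100, K=50, r=0.07, sigma=1e-6, T=1.0)
assert abs(c_stress - (100 - 50 * math.exp(-0.07 * 1.0))) < 1e-6
print("Both validation checks passed")
```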

Model risk increases with model complexity, uncertainty in input assumptions, and the breadth and depth of the model's implementation and use. The higher the model risk, the higher the potential impact of malfunction. Pace (2008) identified challenges associated with effective model risk management programmes. Assigning the correct model definition to models is important, but challenging, because model types (e.g. stochastic, statistical, simulation and analytical) and model deployment methods (ranging from simple spreadsheets to complex, software-interlinked programmes) can sometimes straddle boundaries and defy easy categorisation (PWC 2004). Several authors (e.g. Burns 2006; Epperson, Kalra & Behm 2012; Haugh 2010; Pace 2008) argue that model classification is an important component of model risk management. The model validation process is also considerably simplified if models are classified appropriately and correctly according to their underlying complexity, relevance and impact on businesses.
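To illustrate how such a classification might drive the intensity of validation work, the sketch below assigns models to tiers using a simple complexity-times-materiality score. The scoring scale, thresholds and tier descriptions are hypothetical and are not drawn from the cited authors.

```python
# Illustrative sketch only: a hypothetical model-tiering rule that scales the
# validation effort with model complexity and materiality.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    complexity: int   # 1 (simple spreadsheet) .. 5 (interlinked stochastic system)
    materiality: int  # 1 (negligible exposure) .. 5 (enterprise-wide impact)

def assign_tier(model: ModelRecord) -> str:
    score = model.complexity * model.materiality
    if score >= 16:
        return "Tier 1: full independent validation, annual review"
    if score >= 6:
        return "Tier 2: targeted validation, biennial review"
    return "Tier 3: light-touch review and inventory entry"

print(assign_tier(ModelRecord("Retail PD model", complexity=4, materiality=5)))  # Tier 1
```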

Haugh (2010) presents practical applications of model risk and emphasises the importance of understanding a model's physical dynamics and properties. Example 3 in Appendix 1 illustrates a strong correlation between two variables that clearly do not share any causal relationship. Incorrect inclusion of such variables in models can lead to nonsensical conclusions and recommendations. Calibrating pricing models with one type of security and then pricing other types of securities using the same model can be disastrous. Model transparency is important and substantial risks were found to be associated with models used to determine hedge ratios. These conclusions, although specifically focussed on structured products (collateralised debt obligations) and on equity and credit derivative pricing models, could be equally applied to all models (Haugh 2010; PWC 2004). Example 4 in Appendix 1 gives some risk-related loss examples. These examples clearly illustrate that even simple calculation errors and incorrect models and assumptions can result in devastating losses.

Actively managing model risk is important, but also costly: not only does the validation of models require expensive and scarce resources, but the true cost of model risk management is also much broader than this. The cost of robust model risk management processes includes having to maintain skilled and experienced model developers, model validators, model auditors and operational risk managers, as well as senior management time at governance meetings, opportunity costs (delays in time-to-market because the model risk management process must be completed before a new model supporting a new product can be deployed) and the IT development cost of model deployment.
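To make the point of Example 3 concrete, the short sketch below generates two unrelated, independently simulated series that both trend upwards; their sample correlation is nonetheless very high, even though neither causes the other. The series and parameters are invented for illustration.

```python
# Illustrative sketch only: two independently generated trending series show a
# high correlation despite having no causal relationship (cf. Example 3).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)                                    # e.g. 120 monthly observations
series_a = 100 + 0.8 * t + rng.normal(0, 3, t.size)   # hypothetical bank metric
series_b = 50 + 0.5 * t + rng.normal(0, 2, t.size)    # unrelated, independently simulated series

corr = np.corrcoef(series_a, series_b)[0, 1]
print(f"Pearson correlation: {corr:.2f}")             # typically above 0.95
```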

Although model risk cannot be entirely eliminated, proficient modelling by competent practitioners together with rigorous validation can reduce model risk considerably. Careful monitoring of model performance under various conditions and limiting model use can further reduce risk, but frequent revision of assumptions and recalibration of input parameters using information from supplementary sources are also important activities (RMA 2009). Deloitte (2010) addressed internal model approval under Solvency II. Model validation was identified as a key activity in model management to ensure models remain 'relevant', that is, they function as originally intended both at implementation and over time. Ongoing monitoring to determine models' sensitivity to parameter changes and assumption revisions helps to reduce model risk. Deloitte's (2010) proposed validation policy includes a review of models' purpose and scope (including data, methodology, assumptions employed, expert judgement used, documentation and the use test), an examination of all tools used (including any mathematical techniques) and an assessment of the frequency of the validation process. Independent governance of the validation results, robust documentation and a model change policy (in which all changes to the model are carefully documented and details of changes are communicated to all affected staff) all contribute to effective model management (PWC 2004; Rajalingham 2005).
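Ongoing monitoring of a model's sensitivity to parameter changes can be partly automated. The sketch below bumps each calibration parameter by 10% in turn and records the relative change in the model's output; the expected-loss stand-in, the parameter names and the bump size are our own illustrative assumptions.

```python
# Illustrative sketch only: a one-at-a-time sensitivity check of the kind that
# ongoing monitoring might automate. The model below is a simple stand-in.
def model_output(pd_rate: float, lgd: float, ead: float) -> float:
    return pd_rate * lgd * ead                  # placeholder: expected loss

def sensitivity(base_params: dict, bump: float = 0.10) -> dict:
    """Relative change in output when each parameter is bumped by `bump` in turn."""
    base = model_output(**base_params)
    changes = {}
    for name, value in base_params.items():
        bumped = dict(base_params, **{name: value * (1 + bump)})
        changes[name] = (model_output(**bumped) - base) / base
    return changes

print(sensitivity({"pd_rate": 0.02, "lgd": 0.45, "ead": 1_000_000}))
# Each 10% bump moves expected loss by 10% here; real models respond unevenly.
```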

 

Overview of the proposed model validation framework

Despite the broad market requirement for a coherent model risk management strategy and associated model validation guidelines, the literature is not replete with examples. The Basel Accord and some regulatory authorities have attempted to establish such guidelines but, to our knowledge, no definite set of global standards exists. However, although the literature places varying emphasis on different aspects of model governance, there are encouraging signs of cohesion and broad, common themes emerging. One of these common themes