Monte Carlo analysis has key limitations that make the technique a legitimate cause for concern in new project development. This article discusses four paradoxes that every organization should be aware (and beware) of when developing new capital programs. A balanced approach that combines qualitative and quantitative techniques is advised.
Caveats
Out of the gate, let me say that I love Monte Carlo analysis. I have used the approach for over thirty years. I have degrees in engineering and finance. I built an asset management practice that used Monte Carlo analysis as a core technique and even did a keynote address at an international user conference on how to build this type of practice. My current practice uses Monte Carlo analysis as a core technique.
This article is not about what is right with Monte Carlo analysis but rather what is wrong with it.
What is Monte Carlo Analysis
Monte Carlo analysis is a computer-based method of analysis developed in the 1940s that uses statistical sampling techniques to obtain a probabilistic approximation to the solution of a mathematical equation or model. This 1997 definition from the United States Environmental Protection Agency (USEPA) is painfully simple – too simple by today’s standards – but there are a few key insights to be gained from it.
First, Monte Carlo analysis has been around for roughly 75 years, yet it is still not mainstream.
Second, it involves statistical sampling techniques. There are many assumptions and judgments involved in statistics and statistical sampling.
Third, Monte Carlo analysis approximates a model (a model approximates what happens in the real world). Hmm, an approximation of an approximation.
If it makes things any clearer, the international risk standard (ISO 31010) explains: “techniques such as Monte Carlo simulation provide a way of undertaking the calculations and developing results. Simulation usually involves taking random sample values from each input distribution, performing calculations to derive a result value, and then repeating the process through a series of iterations to build up a distribution of the results. The result can be given as a probability distribution of the value or some statistic such as the mean value.”
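To make that description concrete, here is a minimal sketch in Python. The two-input cost model, the distributions, and the numbers are all hypothetical and chosen only for illustration; any real application would require far more care in selecting them.

```python
# A minimal Monte Carlo sketch: sample each input distribution, compute a
# result, repeat, and summarize the distribution of results.
# The cost model and all numbers below are hypothetical.
import random
import statistics

random.seed(42)           # fixed seed so the illustration is repeatable
N = 10_000                # number of iterations

results = []
for _ in range(N):
    labor = random.triangular(0.8, 2.0, 1.0)   # low, high, most likely ($M)
    material = random.gauss(3.0, 0.5)          # mean, standard deviation ($M)
    results.append(labor + material)           # the "model" is a simple sum

results.sort()
print(f"Mean total cost: {statistics.mean(results):.2f} $M")
print(f"P10 / P50 / P90: {results[int(0.10 * N)]:.2f} / "
      f"{results[int(0.50 * N)]:.2f} / {results[int(0.90 * N)]:.2f} $M")
```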
Uses According to ISO 31010
In general, Monte Carlo simulation can be applied to any system in which a set of inputs interact to define an output, the relationship between the inputs and outputs can be expressed as a set of dependencies, and analytical techniques are not able to provide relevant results or there is uncertainty in the input data.
Monte Carlo simulation can be used as part of risk assessment for two different purposes:
uncertainty propagation on conventional analytical models
probabilistic calculations when analytical techniques do not work (or are not feasible)
Uses According to USEPA
Again, much insight can be gained from the USEPA when considering human health risk assessments. A Monte Carlo analysis may be useful when:
screening calculations using conservative point estimates fall above the levels of concern
it is necessary to disclose the degree of bias associated with point estimates of exposure
it is necessary to rank exposures, exposure pathways, sites, or contaminants
the cost of regulatory or remedial action is high and the exposures are marginal
the consequences of simplistic exposure estimates are unacceptable
Limitations According to ISO 31010
ISO 31010 provides its own list of limitations for Monte Carlo analysis:
The accuracy of the solutions depends upon the number of simulations that can be performed (see the sketch at the end of this section).
The use of the technique relies on being able to represent uncertainties in parameters by a valid distribution.
Setting up a model that adequately represents the situation can be difficult.
Large and complex models can be challenging to the modeler and make it difficult for stakeholders to engage with the process.
The technique tends to de-emphasize high consequence/low probability risks.
Monte Carlo analysis prevents excessive weight from being given to unlikely, high-consequence outcomes by recognizing that all such outcomes are unlikely to occur simultaneously across a portfolio of risks. This can have the effect of removing extreme events from consideration, particularly where a large portfolio is being considered. This can give unwarranted confidence to the decision maker.
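To illustrate the first limitation on that list, and how rare events quietly drop out of small runs, here is a small numerical sketch. The standard normal distribution and the 1-in-1,000 threshold are assumptions made purely for illustration, not drawn from any real project.

```python
# Sketch: the accuracy of a Monte Carlo estimate depends on the number of
# iterations. A roughly 1-in-1,000 event (a standard normal draw above 3.09)
# often never appears at all in a run of 100 or 1,000 iterations.
import random

random.seed(7)

def estimate_tail_probability(n_iterations, threshold=3.09):
    # P(standard normal > 3.09) is approximately 0.001
    hits = sum(1 for _ in range(n_iterations) if random.gauss(0, 1) > threshold)
    return hits / n_iterations

for n in (100, 1_000, 10_000, 1_000_000):
    print(f"N = {n:>9,}: estimated probability = {estimate_tail_probability(n):.5f}")
```

When the extreme event never shows up in the sample, it never shows up in the conversation with the decision maker either.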
Paradox 1
The first paradox is that this quantitative approach depends heavily on qualitative assumptions. Statistical data is unavailable for the things that matter most because we do not run them to failure. And despite formal elicitation methods for developing distributions from expert judgment dating back to 1989, judgment still involves subjectivity.
For that matter, qualitative judgment is also needed to evaluate data quality and the representativeness of the underlying models.
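As a simple illustration of how that judgment enters the math, the sketch below turns a hypothetical three-point expert estimate into an input distribution. The numbers and the choice of a triangular shape are both assumptions, and both carry straight through into every downstream "quantitative" result.

```python
# Sketch: a qualitative expert judgment becomes a quantitative input.
# The three-point estimate and the triangular shape are illustrative choices.
import random

random.seed(1)

# Hypothetical elicitation result: years to failure of an asset
low, most_likely, high = 8, 15, 40    # judgment, not measured data

samples = sorted(random.triangular(low, high, most_likely) for _ in range(10_000))
print(f"Median years to failure: {samples[5_000]:.1f}")
print(f"10th percentile:         {samples[1_000]:.1f}")
```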
Paradox 2
Despite being used as a technique to better understand uncertainty, and despite a near-obsession among experts with widely skewed distributions, Monte Carlo simulations lead us back to the center. This reality is counterintuitive, but the methodology fundamentally recognizes that extreme outcomes are unlikely to occur simultaneously across a portfolio.
Death, divorce, bankruptcy, and natural disasters are uncertainties that impact our personal lives in ways we do not expect. The same happens in business, yet Monte Carlo analysis tends to draw our focus away from the extreme events that usually cause the greatest uncertainty.
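A small sketch of that pull to the center: compare one heavily right-skewed risk with a portfolio of ten identical, independent copies of it. The lognormal distribution is an arbitrary choice for illustration. Individually, the P90 is several times the median; for the portfolio total, the spread relative to the median is much tighter, because all ten risks rarely land in their tails at the same time.

```python
# Sketch: summing independent skewed risks narrows the relative spread.
import random

random.seed(3)
N = 20_000

def pct(sorted_vals, q):
    return sorted_vals[int(q * len(sorted_vals))]

single = sorted(random.lognormvariate(0, 1) for _ in range(N))
portfolio = sorted(sum(random.lognormvariate(0, 1) for _ in range(10))
                   for _ in range(N))

print(f"Single risk: P50 = {pct(single, 0.5):5.1f}, P90 = {pct(single, 0.9):5.1f}, "
      f"P90/P50 = {pct(single, 0.9) / pct(single, 0.5):.1f}x")
print(f"Ten risks:   P50 = {pct(portfolio, 0.5):5.1f}, P90 = {pct(portfolio, 0.9):5.1f}, "
      f"P90/P50 = {pct(portfolio, 0.9) / pct(portfolio, 0.5):.1f}x")
```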
Paradox 3
Running tens of thousands of independent scenarios and examining the cumulative results is the foundational idea of Monte Carlo analysis. However, its most fundamental assumption (shared with most statistical analysis) is also the most troublesome: independence.
Many years ago, a chief executive asked me how I knew there was a 90 percent chance of success. I responded, in simple terms, that I had performed 1000 scenarios, and 900 succeeded. He said the result was not good enough because the system failed 100 times. 100 times! We chuckled at what is called a numerator bias, and we conveniently termed him highly risk-averse.
But we were only partially right.
We missed his instinct that we would never let the system fail 100 times under any long-term operation. We would intercede long before that happened and change the “equation.” In the real world, every subsequent scenario depends on the preceding ones, especially when it comes to the things that matter most.
True that. The lack of independence was also why we had to use expert elicitation to develop the input failure distributions on which the output was based.
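Here is a rough sketch of the point, not the model from that engagement. It contrasts the textbook tally of independent pass/fail scenarios with a version in which a hypothetical intervention rule cuts the failure probability in half after every observed failure, so the scenarios are no longer independent.

```python
# Sketch: independent scenarios versus scenarios that respond to failures.
# The base failure probability and the intervention rule are hypothetical.
import random

random.seed(11)
N = 1_000
BASE_FAILURE_PROB = 0.10

# Textbook Monte Carlo: every scenario is independent
independent_failures = sum(random.random() < BASE_FAILURE_PROB for _ in range(N))

# "Real world": each failure triggers an intervention that halves the
# failure probability for all scenarios that follow
p, adaptive_failures = BASE_FAILURE_PROB, 0
for _ in range(N):
    if random.random() < p:
        adaptive_failures += 1
        p *= 0.5

print(f"Independent scenarios: {independent_failures} failures out of {N}")
print(f"With intervention:     {adaptive_failures} failures out of {N}")
```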
Paradox 4
I can’t add much to, or detract from, what the USEPA stated in 1997.
“One of the most important challenges facing the risk assessor is to communicate, effectively, the insights an analysis of variability and uncertainty provides. It is important for the risk assessor to remember that insights will generally be qualitative in nature even though the models they derive from are quantitative.”
Incremental Approaches
Am I a fan of Monte Carlo analysis for project development? You bet I am, despite its limitations. That is the subject of a different article.
For now, watch out for snake-oil salespeople who pitch it as a cure-all. Monte Carlo analysis is just one tool in the tool bag.
An incremental, tiered approach helps decide whether a Monte Carlo analysis can add value to an assessment and decision. It begins with a simple screening-level model, usually qualitative, and progresses to more sophisticated, realistic, and quantitative models only as warranted by the findings and the value added to the decision.
Ironically, the quantitative analysis usually ends full circle with a qualitative discussion that results in a decision.
JD Solomon Inc provides solutions at the nexus of facilities, infrastructure, and the environment. Contact us for more information on our services related to quantitative risk analysis, forecasting using Monte Carlo simulations, and the development of major capital programs. Sign up for monthly updates on how we are applying reliability and risk concepts to natural systems.