A Healthy Skepticism Toward Models
By Kevin M. Madigan
Variability is a phenomenon in the physical world to be measured, analyzed, and, where appropriate, explained. By contrast, uncertainty is an aspect of knowledge.
—Sir David Cox
Like many actuaries, I have spent a large part of my career designing, building, or using the results of complicated models, in particular those designed to assist decision making in the face of uncertainty. Current events in the regulatory and rating agency realms, as well as advances in technology, guarantee that these sorts of models will continue to play an important role in the property-casualty insurance sector. No aspect of the actuarial profession appears exempt from sophisticated models. Whether an actuary is involved in ratemaking (predictive analytics), loss reserving (stochastic reserving, probabilistic reserve models), or ERM, models are playing a leading role. As models become more ubiquitous, the understanding of their proper role—and the level of healthy skepticism toward their results—is decreasing.
In an April 14, 2011, interview with Insurance Journal, Karen Clark, the founder of the first catastrophe modeling company, said the following: [C]ompanies need to use other credible information to get more insight into their risks…[t]hey need to be skeptical of the numbers. They need to question the numbers. And they should not even use the numbers out of a model if they don’t look right, or if they have just changed by 100 percent…models are very general tools…these are not surgical instruments.
Models are general tools, not precision tools like surgical instruments. Models do not provide answers; they provide information. Many astute readers may have surmised that Ms. Clark was discussing recent changes in one of the commercially available property-catastrophe models. It is important to note that Ms. Clark is not opposed to models but is ringing the alarm that people are using models they do not understand. Further, she is also saying that models—all models—make assumptions that may or may not be correct, or that are only correct in certain circumstances.
One very important modeling issue with which I suspect most of us are already familiar is the distinction between probabilities in the classical “frequentist” sense (e.g., flipping coins, performing repeatable controlled experiments) and the Bayesian sense (e.g., claims propensities, hurricane landfalls, and future interest rates). Unfortunately, many users (and many builders!) of models do not understand this distinction; they interpret the results of models that rely on subjective Bayesian probability assumptions as facts instead of as the indicators that they are. There are many other assumptions to which a model can be extremely sensitive, and it is important for the users of the model output to understand the degree of sensitivity. Furthermore, one must be highly skeptical of model results that claim high degrees of precision (what I like to call “delusional exactitude”).
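The sensitivity point above can be illustrated with a minimal sketch (not from the article; all parameters, names, and distributional choices here are invented for illustration). A toy Monte Carlo loss model draws annual claim counts from a Poisson distribution and severities from a lognormal, then reports a tail percentile. Nudging a single subjective assumption—the expected claim frequency—can move the tail estimate materially, which is why reporting such an estimate to several significant digits would be delusional exactitude:

```python
import math
import random


def poisson_draw(rng, lam):
    """Draw from a Poisson(lam) using Knuth's method (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1


def simulate_annual_losses(freq_mean, sev_mu, sev_sigma, n_years=10_000, seed=7):
    """Simulate total annual losses: Poisson frequency, lognormal severity.

    Every parameter here is a subjective (Bayesian) assumption, not a
    measured fact -- the whole point of the exercise.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    totals = []
    for _ in range(n_years):
        n_claims = poisson_draw(rng, freq_mean)
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(n_claims)))
    return totals


def percentile(xs, q):
    """Crude empirical percentile (adequate for a sketch)."""
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(q * len(xs)))]


# A 20% change in one assumed input shifts the "precise" tail estimate.
for lam in (1.0, 1.2):
    losses = simulate_annual_losses(lam, sev_mu=10.0, sev_sigma=1.0)
    print(f"assumed frequency {lam}: 99th-percentile annual loss "
          f"{percentile(losses, 0.99):,.0f}")
```

A user who sees only the final percentile has no way of knowing how much of it rests on the frequency assumption; running the model across a range of plausible inputs, as above, is the simplest antidote.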
In short, the key is to choose a model that is suitable for the decision being made, and to understand its assumptions and limitations before using it and while interpreting its results. One will never find the “correct” model. The law of diminishing returns will eventually kick in—pick one that you understand and that addresses the question and move on.
I have very mixed feelings about the new ERM regulations being proposed, both in the U.S. and abroad. On the one hand, I think the National Association of Insurance Commissioners’ (NAIC) proposed Own Risk and Solvency Assessment (ORSA) regulatory framework has the potential to be an extremely positive development. The ORSA framework will require (re)insurers to report certain details of their ERM programs, and to provide an internal assessment of their prospective solvency position under both normal and stressed scenarios. I strongly feel that if (re)insurers and regulators approach this new regulatory requirement with care and foresight, it will be a positive evolutionary milestone in insurance regulation. However, if the proper care is not taken in the implementation of this new ERM regulatory paradigm, we could witness an increase in the reliance on computer models by that subset of decision makers who do not understand them and have perverse incentives to treat them as black boxes. As I have alluded to above, we have already lived through this once in the realm of property-catastrophe models. It is imperative that the actuarial profession work to make sure that this does not happen with other ERM models, particularly with economic capital and solvency capital models.
As actuaries, we are in the unique position of understanding the models and the questions they are designed to address. We have the tools, education, and experience to design and build them, to understand those built by others, and to understand their role in decision making. We can, if we so desire, play a very important role in limiting their misuse. Actuaries are the professionals best equipped to tell other decision makers which models are appropriate for which situations, and how much credibility to give to model output. As a profession we must enthusiastically embrace this role.
Several good resources in the actuarial literature address this topic. They are of general interest, even if one does not regularly engage in the actuarial activities they discuss. In particular, I highly recommend two documents by the International Actuarial Association: “Comprehensive Actuarial Risk Evaluation” (May 2010) and “Note on the use of Internal Models for Risk and Capital Management Purposes by Insurers” (November 2010).
I conclude by paraphrasing some excerpts from the blog post “The Financial Modelers’ Manifesto” by Emanuel Derman and Paul Wilmott.
- Models are at bottom tools for approximate thinking, but the world is not as simple as our models.
- One cannot think about finance and economics without models and mathematics, but one must never forget that models are not the world.
- Whenever we make a model of something involving human beings, we are trying to force the ugly stepsister’s foot into Cinderella’s pretty glass slipper. It doesn’t fit without cutting off some essential parts. In cutting off parts for the sake of beauty and precision, models inevitably mask the true risk rather than exposing it.
- The most important question about any financial model is how wrong it is likely to be, and how useful it is despite its assumptions.
- You must start with models and then overlay them with common sense and experience (emphasis added).
And finally, also from “The Financial Modelers’ Manifesto”:
The Modelers’ Hippocratic Oath
- I will remember that I didn’t make the world, and it doesn’t satisfy my equations.
- Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.
- I will never sacrifice reality for elegance without explaining why I have done so.
- Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.
- I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.
Kevin M. Madigan, Ph.D., ACAS, MAAA, ARIAS-U.S. Certified Arbitrator, is a director at PricewaterhouseCoopers LLP.