From the Readers

The Case For Stochastic Reserving
Dear Editor:
In his 2010 “Random Sampler” column (“The Case Against Stochastic Reserving,” AR, August 2010), Ralph Blanchard criticizes stochastic loss reserve models because they overlook information that can be used to develop a more accurate reserve. This is an interesting (and widely held) theory, but I have seen no evidence that it works in practice. The claim settlement process is complex and messy. The enormous number of facts that one can consider, and the high financial stakes riding on the loss reserve, make the situation ripe for what psychologists call “confirmation bias”: the human tendency to give more weight to information that supports one’s preconceptions. Despite some claims to the contrary, my experience with actuaries is that they are very human.

I addressed this question in a 2007 Variance paper titled “Estimating Predictive Distributions for Loss Reserve Models.” In Section 8 of that paper, I performed a retrospective test comparing the predictions of a stochastic model with the posted reserves of 109 insurers. In that test, the stochastic model did significantly better than the posted reserve. Mike Wacek came to a similar conclusion in his paper, “A Test of Clinical Judgment vs. Statistical Prediction in Loss Reserving for Commercial Auto Liability” (CAS Forum, Winter 2007). Although the actuarial role in posting the reserve varies by insurer, those with access to all the information have not been able to post a more accurate reserve. While two studies such as these should not be viewed as conclusive, I submit that the stochastic models are ahead.

I do agree with Ralph that predictive models can be used to “identify the drivers of uncertainty.” But we should keep in mind that most of the predictive models used by actuaries, such as GLMs, rely on stochastic assumptions. As I illustrated in my “Brainstorms” column in the same issue of the Actuarial Review, an understanding of the stochastic nature of the data can lead to more accurate point estimates. In the effort to identify the drivers of uncertainty, it is important to distinguish the signal from the noise. I think the best way to identify the drivers is to estimate the predictive distribution of a future statistic of interest, such as the total payments for the next calendar year, reflecting both parameter and process risk. When the future comes, we should do a retrospective test on that statistic. If the statistic falls far out in the tail, one should look for systematic departures from the model assumptions (perhaps a black swan) or investigate the underlying model.
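
To make that test concrete, here is a minimal sketch of such a retrospective tail test in Python. The function name, the stand-in lognormal predictive distribution, and the 1% tail threshold are illustrative assumptions, not anything prescribed in the letter; it simply assumes the predictive distribution is available as simulated draws.

import numpy as np

def retrospective_tail_test(predictive_samples, actual, tail_prob=0.01):
    """Locate an observed statistic within its simulated predictive distribution.

    predictive_samples: simulated draws of the future statistic (e.g., next
        calendar year's total payments), reflecting parameter and process risk.
    actual: the value observed once the future arrives.
    tail_prob: flag the outcome if it falls in either tail beyond this
        probability (the 1% level here is purely illustrative).
    """
    samples = np.asarray(predictive_samples)
    # Empirical percentile of the actual outcome within the predictive draws.
    percentile = np.mean(samples <= actual)
    in_tail = percentile < tail_prob or percentile > 1.0 - tail_prob
    return percentile, in_tail

# Illustrative use with a stand-in lognormal predictive distribution.
rng = np.random.default_rng(2010)
draws = rng.lognormal(mean=6.0, sigma=2.0, size=10_000)
pct, flagged = retrospective_tail_test(draws, actual=5_000.0)
print(f"actual falls at the {pct:.1%} point of the predictive distribution; "
      f"investigate the model: {flagged}")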

I do have problems with the way stochastic models are being used. First, posting a range for the reserve, and then allowing other (usually opaque) considerations to determine where to post the point estimate within the range, is not the way to go. Instead, a wide reserve range should signal the need either to increase capital or to put a larger risk margin on the reserve; Solvency II requires the latter. My second problem is that we actuaries are not very good at stochastic modeling for loss reserves. I suspect this is the source of the widespread frustration with stochastic models. While there is a lot of good work going on, we will not get good at stochastic loss reserving until we have gained considerable experience with retrospectively testing our predictions.

In summary, we live in a world where the future is uncertain. We need to develop the best tools to quantify that uncertainty and better procedures for dealing with it. I submit that stochastic models are the best tools we now have available, but further development, with retrospective testing, is necessary.
    —Glenn G. Meyers, FCAS

Dear Editor:
As I read Ralph Blanchard’s essay, I found myself nodding in agreement, even though I spent much of my 36-year career developing and using both stochastic and predictive models in my work.

I concur with Ralph that we have to be wary of our models, which are often constructed from limited data and unproved assumptions. It is here that I think it is wise, if not imperative, to track actual results against the results of such models to see (a) how they worked (or failed!) and (b) the extent to which the differences are “random” (i.e., not found to have a nonrandom explanation).

In short, I agree with Mr. Blanchard’s essay. I do, however, think the title is somewhat misleading: the essay is not “against” stochastic reserving so much as it warns against potentially major pitfalls in its use and against giving such models blind obedience.
—Brad Gile, FSA, MAAA, AFFI

Another View on Point Estimates

Dear Editor:
I would like to comment on Glenn Meyers’ “Brainstorms” column (“Point Estimates,” AR, August 2010). In his first example, for a sample from a LogNormal Distribution, Glenn compares the sample mean and the maximum likelihood estimator of the mean. Although it was not the main thrust of his article, his results depend very significantly on the sample size. Maximum likelihood estimators are asymptotically unbiased and asymptotically efficient, so they perform well for large sample sizes. However, maximum likelihood estimators may perform relatively poorly for small sample sizes. For samples from a LogNormal, one can determine the bias and variance of his two estimators in closed form; I have included the mathematics in an appendix. In his example, Glenn simulates a sample of size 1000 from a LogNormal Distribution with µ = 6 and σ = 2. For this case, the bias, standard deviation, and root mean squared error of the two estimators are listed below for various sample sizes:1
[Chart: bias, standard deviation, and root mean squared error of the sample mean and the maximum likelihood estimator, by sample size.]

For this example, for samples of size less than 17, maximum likelihood has a larger root mean squared error than the sample mean. For samples of size more than 16, maximum likelihood has a smaller root mean squared error than the sample mean.2
—Howard Mahler, FCAS

1 Mean Squared Error = Variance + Bias². The Root Mean Squared Error is the square root of the Mean Squared Error.

2 The crossover point depends on the sigma parameter of the LogNormal Distribution.
The larger the sigma, the higher the crossover point.
For sigma=1, for samples of size less than 12, maximum likelihood has a larger root mean squared error than the sample mean.
For sigma=3, for samples of size less than 27, maximum likelihood has a larger root mean squared error than the sample mean.
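
These crossover points can also be checked by simulation. Below is a minimal Monte Carlo sketch in Python; it is an illustrative stand-in for the closed-form appendix mathematics (not reproduced here), and the function name, trial count, and seed are arbitrary choices.

import numpy as np

def compare_estimators(mu=6.0, sigma=2.0, n=16, trials=200_000, seed=0):
    """Monte Carlo bias, standard deviation, and RMSE for two estimators of a
    LogNormal mean: the sample mean, and the maximum likelihood estimator
    exp(mu_hat + sigma_hat^2 / 2), where mu_hat and sigma_hat^2 are the mean
    and (MLE, divide-by-n) variance of the logged sample."""
    rng = np.random.default_rng(seed)
    true_mean = np.exp(mu + sigma**2 / 2)
    x = rng.lognormal(mean=mu, sigma=sigma, size=(trials, n))

    logs = np.log(x)
    estimators = {
        "sample mean": x.mean(axis=1),
        "MLE": np.exp(logs.mean(axis=1) + logs.var(axis=1) / 2),
    }
    results = {}
    for name, est in estimators.items():
        bias = est.mean() - true_mean
        sd = est.std()
        # Footnote 1: Mean Squared Error = Variance + Bias^2.
        results[name] = (bias, sd, np.sqrt(sd**2 + bias**2))
    return results

# Compare RMSEs on either side of the sigma = 2 crossover near n = 17;
# Monte Carlo estimates are noisy at small n because of the heavy tails.
for n in (10, 16, 17, 32):
    rmses = {k: round(float(v[2]), 1) for k, v in compare_estimators(n=n).items()}
    print(n, rmses)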

