Actuarial Review

Actuarial Roundtable Discussion


Reserving the Berquist-Sherman Way

by Arthur J. Schwartz

James Berquist and Richard Sherman wrote a classic paper on reserving in the 1977 Proceedings. Among many notable concepts, this paper introduced to the actuarial literature two interesting methods for adjusting loss development triangles. The incurred triangle is adjusted for changes in case reserve adequacy and the paid triangle is adjusted for changes in claim settlement rates. One year later, Joseph Thorne reviewed the Berquist-Sherman paper. The paper and the review are still required reading for the CAS Exam 6, which covers reserving.

Subsequently, in a 1988 Discussion Paper, Kirk Fleming and Jeffrey Mayer studied these methods and proposed several changes to improve their accuracy. A key point of their paper was that changes in claim settlement rates could give a misleading indication of case reserve adequacy.

Joining me in a discussion of the two methods are James Berquist, Richard Sherman, Joseph Thorne, and Jeffrey Mayer.

James Berquist is a retired actuary in Oceanside, California. His paper with Richard Sherman won the Dorweiler Prize in 1978. He has served three different terms on the CAS Board of Directors, most recently from 1985 to 1987, and has chaired or served on numerous CAS Committees, including the Committee on Reserves. He is the 2001 recipient of the Matthew Rodermund Service Award.

Richard Sherman is president of Richard Sherman and Associates, Inc., in Ashland, Oregon. He has participated in loss reserve studies for 28 of the nation's largest insurers. He has written papers on estimating the variability of loss reserves and extrapolating loss development factors. He jointly won the Dorweiler Prize with James Berquist in 1978. He has written 65 "Ask a Casualty Actuary" articles in Business Insurance over the past 16 years.

Joseph Thorne is a consulting actuary in Laguna Niguel, California. His review of the Berquist-Sherman paper appeared in the 1978 Proceedings.

Jeffrey Mayer is a senior vice president with AIG Risk Finance in New York City. He has been active with several committees, including the Long Range Planning Committee and the Advisory Committee on Valuation of P/C Insurance Companies.

Schwartz: Some comments on the incurred method first. A) Thorne points out that the method is sensitive to the trend rate selected and uses a trend derived from closed claims, yet applies it to a triangle with a mix of open and closed claims. B) The method assumes the trend is constant at all evaluation dates; by using the same trend factor it could introduce a trend into the data that is not really there; using a single de-trend factor will reduce the inherent variability in the data; and the method supposes that the case reserves along the latest diagonal are adequate, which puts disproportionate weight on the accuracy of the most recent diagonal. C) Mayer and Fleming point out that changes in claim settlement rates can give a misleading impression of case reserve adequacy. What are your views on these comments?

Sherman: I would take issue with the word "adequate" in item "B" above. The method really adjusts to the adequacy level inherent in the latest diagonal. The triangle is adjusted to a constant level of adequacy, though not necessarily an "adequate" one.

Mayer: The goal is not to adjust to adequate reserves, but to the current level of reserve adequacy.
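
[Editor's note: To make the mechanics concrete, here is a minimal sketch of the case reserve adequacy adjustment as described in the question above: the average case reserve in each column is restated at the level implied by the latest diagonal, de-trended back to earlier accident years, and recombined with paid losses. The triangles, the 8 percent trend, and the Python layout are invented for illustration, not taken from the paper.]

```python
# Sketch of the Berquist-Sherman case reserve adequacy adjustment.
# All figures are invented; "trend" is an assumed annual severity trend
# (e.g., selected from closed claim severities).
import numpy as np

# Cumulative triangles indexed [accident year, development age]; np.nan = future cell.
paid = np.array([[100., 250., 400.],
                 [120., 300., np.nan],
                 [150., np.nan, np.nan]])
case_outstanding = np.array([[300., 200., 50.],
                             [380., 260., np.nan],
                             [500., np.nan, np.nan]])
open_counts = np.array([[60., 25., 5.],
                        [70., 30., np.nan],
                        [80., np.nan, np.nan]])

trend = 1.08                                   # assumed annual severity trend
n = paid.shape[0]
avg_case = case_outstanding / open_counts      # average case reserve per open claim

# Restate each column at the adequacy level of the latest diagonal:
# take the diagonal's average and de-trend it back to earlier accident years.
adj_avg_case = np.full_like(avg_case, np.nan)
for dev in range(n):
    diag_ay = n - 1 - dev                      # accident year on the latest diagonal
    for ay in range(diag_ay + 1):
        adj_avg_case[ay, dev] = avg_case[diag_ay, dev] / trend ** (diag_ay - ay)

# Adjusted incurred = paid to date + restated case reserves;
# this triangle is then developed with the usual methods.
adj_incurred = paid + adj_avg_case * open_counts
print(np.round(adj_incurred, 1))
```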

Thorne: Regarding item "A," in my 1978 discussion I stated on the first page, "For the most part, the methodology used in the model is designed for analysis of paid losses rather than incurred losses." I was just out of graduate school and in my first actuarial job; the statement seems a bit strong to me now. I use both adjustments where warranted. Yet three decades later I continue to believe the estimates from the methods adjusting paid losses are much more useful and less sensitive to assumptions than those adjusting incurred losses.

Sherman: I disagree somewhat. Both methods are vulnerable and have their own unique sensitivities. One key area is to get a good triangle of claim count data, on a consistent basis, and to understand any changes that have occurred in the claim reserving or handling process. Situations can arise with the claim count triangle, such as the inclusion of trivial claims, the inclusion of claims closed without payment, or shifts in the relative presence of either, that can throw the adjustment methods out of whack. For example, if trivial claims are a growing percentage of total counts, this can cause claim disposal ratios to increase, indicating an illusory claims speed-up. Since the paper was written, I have mellowed, and I take the results of applying the techniques in the paper with a grain of salt.
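
[Editor's note: A toy numerical example of Sherman's point, with invented counts: if trivial or closed-without-payment claims become a growing share of closings, the disposal ratio at a fixed development age rises even though settlement of claims with payment has not sped up at all.]

```python
# Invented counts illustrating how a growing share of trivial /
# closed-without-payment (CWP) claims inflates disposal ratios at a fixed
# development age, suggesting a settlement speed-up that is not there.
ultimate_counts = 1000          # estimated ultimate claim counts, both years
age = 12                        # months of development

closed_with_payment = {"older year": 300, "recent year": 300}   # genuinely unchanged
closed_without_pay = {"older year": 50, "recent year": 200}     # trivial claims growing

for ay in ("older year", "recent year"):
    ratio_all = (closed_with_payment[ay] + closed_without_pay[ay]) / ultimate_counts
    ratio_excl = closed_with_payment[ay] / ultimate_counts
    print(f"{ay} at {age} months: disposal ratio {ratio_all:.2f} with CWP, "
          f"{ratio_excl:.2f} without CWP")
```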

Mayer: I agree with the point. Some actuaries view the mechanical adjustments as sacred. Done properly, the adjustment techniques require insight. Look at the claim settlement rates: if they show a speed-up, then it's important to adjust the paid dollars. In the Fleming-Mayer paper, we saw that reviewing the average outstanding levels should not be done independently of settlement rates. So we looked at claims closed on a percentage basis. You can also look at the triangle of average outstanding levels on a percent-closed basis.

Berquist: Whether trivial claims are included in the claim count triangle can be important. We need to define the triangle of reported claims carefully.

Mayer: What you do not want is blind application of the adjustment techniques; when applying them becomes a blind exercise, your analysis is in trouble. The assumptions behind the techniques are important. A good actuary applies the techniques but interprets the results with good judgment and a good understanding of the underlying claims process. A mechanical exercise can bring a computer to its knees, but we should not use technology as a substitute for knowing, really knowing, the underlying claims process.

Berquist: I recently reread our paper. I think we hit a few things dead on. You cannot do reserving mechanically. I'm reminded of a project on medical malpractice reserving in California. It turned out that the claim count was defined differently in the northern part of the state versus the southern part.

Mayer: In the back of the Berquist-Sherman paper is an appendix with a very detailed questionnaire. It covers many of the questions that you need to ask before carrying out any reserving assignment.

Berquist: I agree one hundred percent!

Sherman: In the paper and in Fleming-Mayer, there's an assumption that all other things are equal. If there is a huge increase in the retention, and you're dealing with a net triangle, then adjusting for the adequacy of case reserves has another dimension. You must also adjust for changes in the retention.

Mayer: The data may indicate an increase in reserve adequacy when all it really reflects is a change in retentions. You may need to de-trend the losses to index the retentions, so you have more than the simple inflationary trend in claim sizes to deal with.
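
[Editor's note: One way to read Mayer's suggestion, and it is only the editor's reading, is to restate the retention in constant dollars before comparing net severities across accident years, so that a change in nominal retentions is not mistaken for a change in reserve adequacy. The trend, retention, and claim sizes below are invented.]

```python
# Invented example: cap ground-up claims at a trend-indexed retention so that
# average capped severities are comparable across accident years.
trend = 1.07                   # assumed annual severity trend
base_retention = 250_000       # retention expressed in base-year dollars
base_year = 2000

claims = {                     # ground-up claim sizes by accident year (invented)
    2000: [40_000, 180_000, 900_000],
    2005: [55_000, 260_000, 1_200_000],
}

for year, sizes in claims.items():
    indexed_retention = base_retention * trend ** (year - base_year)
    capped = [min(x, indexed_retention) for x in sizes]
    avg_capped = sum(capped) / len(capped)
    print(f"{year}: indexed retention {indexed_retention:,.0f}, "
          f"average capped severity {avg_capped:,.0f}")
```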

Sherman: Sometimes changes in the mix of business written need to be reflected. There can be a change from low severity type of business to a higher severity type of business. That can also distort the adjustments.

Berquist: I see our paper as if it were a 1977 automobile. Things change. Technology has changed. One of the prime motivators for writing our paper was that the accountants were moving in on our area of expertise. We needed to show that reserving was not a "Simple Simon" exercise, that you could not simply extrapolate or develop a paid and incurred triangle. We wanted to show that actuarial methods and actuarial judgment were as important in setting reserves as they were in ratemaking, where the actuary's role was unquestioned.

Mayer: In item "C" above, the Fleming-Mayer paper really points out not so much that there's a change in case reserve adequacy, as that there has been a change in the perception of case reserve adequacy.

Schwartz: Now looking at the paid method: A) Thorne points out that the method supposes a mathematical relationship between the number of closed claims and loss payments. B) Thorne points out that the method may need to be adjusted to recognize settlement patterns by size of loss. This seems especially valid for long-tail lines, where larger claims have become more common in recent years. C) The method supposes that the relation between closed claims and loss dollars paid is captured well by an exponential formula. Each of these three hypotheses can be tested only empirically. What are your views on these comments?

Thorne: Jim's comments are right on the mark. Accountants were moving into reserving. There was a temptation to treat reserving as if it were a black box. Even the articles on the syllabus in the 1970's, such as Stern on automobile ratemaking, Marshall and Kallop on workers compensation ratemaking, and Salzmann's reserving chapter in the IASA text, were mostly at the "Triangles 1A" course level for reserving. I benefited greatly from each of these articles. However, the loss methods were primarily focused on how to select development factors from a variety of triangles, and this was particularly true for the ratemaking articles. The triangle was what you had left over from ratemaking when you tried to estimate reserves. Triangles became the whole basis for the reserving methods of the day.

I believe empirical evidence supports the notion that the Berquist-Sherman paper must have been a quantum leap over the literature of the day; namely, it is still on the syllabus three decades later. However, around the same time there was a paper by Bob Finger regarding pure premium by layer of loss. That paper was one motivation for my consideration of shifts in losses by size of loss and for writing the 1978 discussion of the Berquist-Sherman paper. I actually felt a need to pull out my college statistics book while reading Bob's article. The Berquist-Sherman paper took us to "Triangles 101A," but I did not need my college statistics book while reading it.

Three decades later, I continue to find little need for my college statistics book when reviewing actual reserve studies of property/casualty actuaries. Our love affair with triangles continues. The magnitude of distortion in reserve estimates due to shifts in size of loss distributions can be huge! In a related recent case involving a very large client company, the actual reserve estimate with hindsight was double the unadjusted estimates made at the time.

Sherman: I've seen that as well.

Thorne: It should be noted though that over 90 percent of the time the adjustments in Berquist-Sherman are not needed. For the other 10 percent, the adjustments can be critical. This potentially critical effect invites at least testing for these types of shifts.

Sherman: Few insurers even have a good history of size of loss data. That's surprising but true.

Thorne: Thirty years later we ought to be seeing better data. It is interesting to note that corporate clients often do have the size of loss data that insurers lack. However, most commercial insurers have been submitting their workers compensation loss data to HNC Software, Inc. for a decade or more. The insurers' interest in submitting quality data to HNC is primarily due to their desire to get back reliable estimates from the artificial-intelligence statistical methods of HNC's MIRA software. The MIRA estimates can aid their claims adjusters in setting case reserves. HNC data is thus a source of claims by size of loss for workers compensation.

Sherman: In the paper, we used an exponential curve to relate paid claim counts to paid loss dollars. Over the years, this curve has proven to be a pretty good choice.

Thorne: There may be a cause and effect relation between that exponential assumption and the underlying experience. Smaller claims tend to close out sooner, which leads to an exponential formula. The ultimate test though is to step back and make the empirical observation that the graph sure does look exponential.
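
[Editor's note: A minimal sketch of the exponential relationship being discussed, with invented numbers. In the paid adjustment, once closed claim counts have been restated to a common settlement rate, the corresponding paid losses are obtained by interpolating along a curve of the form paid = a * exp(b * counts) fitted through the surrounding actual points.]

```python
# Invented example: interpolate adjusted paid losses from adjusted closed
# claim counts using an exponential curve fitted through two actual points.
import math

# Actual (cumulative closed counts, cumulative paid losses) at adjacent
# evaluation dates for one accident year.
x1, y1 = 400.0, 1_200_000.0
x2, y2 = 550.0, 2_100_000.0

# Adjusted closed count for this cell after restating settlement rates
# (assumed to fall between x1 and x2).
x_adj = 500.0

# Fit paid = a * exp(b * counts) through the two points, then evaluate at x_adj.
b = (math.log(y2) - math.log(y1)) / (x2 - x1)
a = y1 / math.exp(b * x1)
paid_adj = a * math.exp(b * x_adj)

print(f"adjusted paid at {x_adj:.0f} closed claims: {paid_adj:,.0f}")
```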

Berquist: We agree on the exponential formula. We [Sherman, Thorne, and Berquist] worked together on a large number of reserve studies.

Thorne: That environment, which Jim created, was so different from today. You did not have the current level of competition among major consulting firms. We worked on a large spectrum of different situations. That spectrum included very large to very small insurers as well as more volatile lines of business such as medical malpractice and workers compensation to the very stable automobile property damage coverage. The questions in the appendix benefited from this tremendous variety of situations, as each situation was an actual wrinkle that had been encountered in practice during the 1970's. Many of them continue to be encountered today.

Sherman: I'm glad that Joe [Thorne] wrote that review of the paper. There's a tendency to blindly apply methods without serious regard for possible problems. Applying the adjustment for a change in case reserve adequacy is only as valid as the consistency of the count data. There also needs to be a review of changes in claim count data or other shifts that are going on.

Schwartz: Since the time each of you wrote your papers, have you come across any improvements in the two methods or simply better techniques for making the indicated adjustments for case reserve adequacy and for claim settlement rates? Also, are there any special situations, for example, changes in retention levels, in which either method may be misleading without further testing? (Also, I want to clarify the Fleming-Mayer approach. A key point of the paper is that an increase in the outstanding losses may seem to show reserve strengthening. Instead it may simply reflect a speed-up in claim settlement rates.)

Thorne: How does the Fleming-Mayer approach differ from the hindsight outstanding severity method?

Mayer: First, start with adjusting the paid triangle as in Berquist-Sherman, with all the caveats noted above. Second, take the original incurred loss and reported count triangles and subtract the paid loss and paid claim count triangles; that leaves triangles of case outstanding losses and open claim counts. Third, look at the trends in average case outstanding losses. Look at the trend at various percentiles of settled claims. The trend rate should be the weighted trend rate of closed claims.
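
[Editor's note: A short sketch of the second and third steps Mayer describes, with invented triangles. Closed claim counts are used here as a stand-in for the paid claim count triangle he mentions.]

```python
# Invented triangles: derive case outstanding and open claim counts by
# subtraction, then examine average case outstanding down each column
# (accident years at a common development age) for trend.
import numpy as np

incurred = np.array([[600., 700., 750.],
                     [700., 830., np.nan],
                     [820., np.nan, np.nan]])
paid = np.array([[200., 450., 650.],
                 [230., 520., np.nan],
                 [260., np.nan, np.nan]])
reported_counts = np.array([[90., 100., 102.],
                            [95., 108., np.nan],
                            [99., np.nan, np.nan]])
closed_counts = np.array([[40., 80., 98.],
                          [42., 85., np.nan],
                          [45., np.nan, np.nan]])

case_outstanding = incurred - paid
open_counts = reported_counts - closed_counts
avg_case_outstanding = case_outstanding / open_counts

# Compare the year-to-year change in each column against a severity trend
# selected from closed claims, to judge whether apparent reserve
# strengthening is real or an artifact of faster settlement.
print(np.round(avg_case_outstanding, 1))
```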

Thorne: An advantage of using the ultimate is that it's not as dependent on the case reserving philosophy. A disadvantage of the hindsight average outstanding method is that it's like playing claims adjuster without the detailed knowledge the adjuster has of the actual claims.

Sherman: The Fleming-Mayer paper proposes a helpful enhancement to the Berquist-Sherman adjustment methods. I'm glad this paper is part of the actuarial literature.

Thorne: You will need to look carefully at trends in closed claims.

Mayer: If the data is too volatile, then you need to pick a trend rate. The Fleming-Mayer approach makes sense for the most recent three to four years. At some point, for older years, you should just use traditional methods.

Thorne: A weighted average trend factor is based on all claims not yet closed, including late-reported IBNR claims. The average in the paper is more applicable to the difference of ultimate losses minus paid losses, since that difference includes IBNR claims. The reported losses used in the paper do not include IBNR claims.

Mayer: Exactly.

Thorne: Over three decades since the paper was written, we have so many better techniques available. We have seen vast improvements in part due to the personal computer revolution.

One key transformation is data. Over the past ten years, corporations such as HNC have been accumulating individual workers compensation claim data for most large commercial insurers in the nation. The WCRI [Workers Compensation Research Institute] in California and the NCCI [National Council on Compensation Insurance] are currently working on a similar workers compensation database designed to support rate filings by providing a better understanding of the underlying forces driving rates. This kind of data is crucial to understanding what I call the "whys behind the numbers." It is much easier to find an actuarial student (or senior actuary) who can play with the numbers than one who will get into changes in the underlying claims environment or changes in insurance company operations. In my experience, the ones who can do more than "play with the numbers" seem to rise to the higher-paid positions (take note, actuarial students).

The other area where I have seen progress is methods. Methods that were at best cumbersome in the 1970's are much more accessible now. There's some very powerful software available to run on a PC. For example, there is @Risk from Palisade. Just with a PC you can fit dozens of distributions to size of loss data and obtain related statistics for evaluating those fits. A second example is Stata. This software is a standard among Ph.D. statisticians working in econometrics and multiple regression. I have applied size of loss distribution fits and multiple regression concurrently, with some very useful results: adjusting for changes in claim closure rates, adjusting for changes in case reserve adequacy, and identifying "red flags." For example, the mean or coefficient of variation of the fitted size of loss distributions at various levels of development in the triangle can serve as a measure of shifts in severity or frequency. Whether it is the mean or the coefficient of variation breaking out of historical trends at some common point of development, that is a red flag to me. A PC can now combine distribution theory and econometrics to the point where you may not even want to look at the triangle except to reconcile to higher macro levels of loss summarization.
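
[Editor's note: A sketch of the kind of "red flag" check Thorne describes: fit a size of loss distribution to closed claims at each development age and track the fitted mean and coefficient of variation across ages. The scipy library and the lognormal choice are the editor's stand-ins for the @Risk and Stata workflow he mentions, and the claim sizes are simulated, not real data.]

```python
# Simulated claim sizes by development age; fit a lognormal at each age and
# watch the fitted mean and coefficient of variation (CV). A value breaking
# out of the historical pattern at some age is the kind of "red flag"
# described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
claims_by_age = {
    12: rng.lognormal(mean=9.0, sigma=1.2, size=400),
    24: rng.lognormal(mean=9.5, sigma=1.2, size=300),
    36: rng.lognormal(mean=9.7, sigma=1.6, size=200),   # CV breaks trend here
}

for age, sizes in claims_by_age.items():
    shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
    mean = stats.lognorm.mean(shape, loc=loc, scale=scale)
    cv = stats.lognorm.std(shape, loc=loc, scale=scale) / mean
    print(f"age {age:>2} months: fitted mean {mean:>10,.0f}  CV {cv:.2f}")
```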

However, Sherman made a very good point earlier. The size of loss data often does not exist to apply these methods, especially among commercial insurers. I have not found this to be much of a limitation, since in the later part of my career I have worked substantially with medium-to-large corporations. They or their third party administrators (claims adjusting firms) seem to have systems that can provide such claim data by size of loss. As noted above, I have also found such data more available for workers compensation than for other coverages. Perhaps that is due to the statutory nature of the business.

We should also remember that the availability of size of loss data, and of methods that require reference to our college statistics books, does not imply they should always be applied. That is true even if we are working in the traditional triangle environment of the Berquist-Sherman adjustment methods. As noted earlier, over 90 percent of reserve studies do not require any of these adjustments. In those cases the adjustments could be a waste of the actuary's time and the client's money.

Sherman: In many situations, however, there simply are no major changes in the reserving environment and no clear need to apply the techniques. You see randomness, but no real trend.

Thorne: Many reserve reports that I have reviewed do not address whether they have tested for such changes [in claim settlement rates or case adequacy]. I'd like to think that three decades later it would be standard to test for those changes and note the testing in the reserve report or at least in the work papers. This testing and documentation would be appropriate even if the adjustment methods end up not being applied. I believe that an actuarial student can fall in love with triangles and do quite well in a career "squaring triangles." One can just apply Triangles 1A methods without even going to Triangles 101A or beyond. What a pity, though.

Mayer: If you put in some hard and honest work in getting answers to the questions, in asking underwriters about the book of business, in asking about the retentions by line of business, then the technology we have today is terrific. It only works if you do the hard work, though.

Schwartz: Thanks for a great discussion!