Stephen W. Philbrick
I received an e-mail recently from Dave Clark with some thoughts on an actuarial issue. The issue has a mouthful of a title, "Infinitely Decomposable Loss Development Patterns," but the underlying concept is more basic than that title would suggest.
We tend to confront two actuarial questions with two different approaches. When given loss data by size of claim, and a need to project the future distribution of amounts by size, we tend to fit a statistical distribution to the results. However, when we are given claim emergence data, whether counts, payments, or incurred values, we are more apt to calculate empirical ratios and apply those ratios to the future. I asked Tom McIntyre why we use an empirical approach to development factors in light of the distributional approach to size-of-loss problems, and his answer was, "Because it works."
I suspect he is right, but I also think there is value in a more statistical approach. Dave Clark is expanding on some work done by Robbin and Homer (www.casact.org/pubs/dpp/dpp88/88dpp501.pdf), which in turn is based upon some thoughts I had discussed in the 1986 Discussion Paper Program (www.casact.org/pubs/dpp/dpp86/86dpp116.pdf).
In brief, the underlying concept is that we should analyze the emergence, whether claim counts, claim payments, or claim incurred amounts, with respect to a single point in time. Our normal approach is to look at the empirical development of a bundle of claims, typically from a 12-month period such as an accident year. We need to abstract twice: first, from the actual claims for the period of time to the expected claims for that period of time; second, rather than analyzing an entire period, in which some claims are older than others, we need to examine a point in time, that is, the expected claims coming from an arbitrarily small interval of time. If we can specify a statistical form for the emergence pattern of this point in time, we can calculate emergence patterns on a variety of bases from that building block. For example, the ubiquitous accident-year patterns would consist of an average (integral) of the point-in-time pattern over a one-year period. Integrating over three months would produce accident-quarter factors. Policy-year factors consistent with the accident-year factors would involve a slightly more complicated integral, one reflecting the parallelogram of exposure over the 24-month period. Reinsurers could calculate underwriting-year patterns by integrating over a 36-month period. This approach is particularly well suited to the calculation of incomplete-period factors, such as six- or nine-month accident factors. Finally, for companies with unusual exposure growth, either positive or negative, emergence factors could be calculated by integrating over the period, reflecting the expected exposures at each point in time.
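The averaging described above is easy to sketch numerically. The code below is only an illustration of the mechanics, not the method Dave Clark is developing: the exponential point-in-time curve, the parameter theta, and the function names are all assumptions made purely for the example. Given any point-in-time cumulative emergence curve, period factors for an accident year, accident quarter, or any other uniform-exposure period come from averaging that curve over the occurrence dates in the period.

```python
import numpy as np

def point_in_time_cdf(t, theta=1.5):
    # Hypothetical point-in-time emergence curve: the fraction of ultimate
    # emerged t years after a claim's occurrence instant. An exponential
    # form is used purely for illustration (illustrative assumption).
    return np.where(t > 0, 1.0 - np.exp(-np.maximum(t, 0.0) / theta), 0.0)

def period_reported_fraction(age, period=1.0, n=10001):
    # Expected emerged fraction for a cohort with uniform exposure over
    # `period` years, evaluated `age` years after the period starts:
    # the average (integral) of the point-in-time curve across the
    # occurrence dates within the period.
    s = np.linspace(0.0, period, n)   # occurrence times within the period
    return float(np.mean(point_in_time_cdf(age - s)))

# Accident-year age-to-age factor (12-to-24 months, in years):
f_12_24 = period_reported_fraction(2.0) / period_reported_fraction(1.0)

# Accident-quarter factors come from the same building block simply by
# integrating over a three-month period instead:
q_factor = (period_reported_fraction(0.5, period=0.25)
            / period_reported_fraction(0.25, period=0.25))
```

The same routine handles incomplete periods (a six-month accident period is just `period=0.5`), and the parallelogram of a policy year or the exposure curve of a growing book would enter as a non-uniform weight inside the average.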
One stumbling block I ran into was the identification of statistical distributions that fit the underlying data reasonably well and could be easily integrated. Dave Clark has identified a special case of a beta distribution that may be suitable. The mathematics are beyond what I like to include in this column, but Dave Clark would be happy to share with you the preliminary work he has completed. He can be reached at email@example.com.
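The specific special case Dave Clark identified is not reproduced here, but a hedged sketch shows why a beta form is attractive for this role: its CDF on a bounded support starts at zero, reaches 100 percent at a finite age, and is smooth enough to average over any exposure period. The support T and shape parameters below are illustrative assumptions only, not his parameterization.

```python
from scipy.stats import beta

# Illustrative assumptions only -- NOT the special case Dave Clark
# identified, just a generic beta emergence curve on a bounded support.
T = 10.0          # assumed age (years) at which emergence is complete
a, b = 1.2, 2.5   # assumed beta shape parameters

def beta_point_in_time_cdf(t):
    # Fraction of ultimate emerged t years after a claim's occurrence
    # instant: 0.0 before occurrence, 1.0 at age T and beyond.
    return beta.cdf(t / T, a, b)
```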
I am sure the traditional approach will continue to be the approach of choice when one has a decent amount of data in a particular format (such as accident year) and the need to estimate factors on the same basis. The alternative approach is best suited to situations where some conversion is needed, that is, where data is available in one format, such as accident year, but factors are needed in a different format, such as policy quarter, or incomplete-year factors are needed. However, if we become adept at identifying patterns for point-in-time distributions for those cases, we might decide to use this approach more generally.