Estimating trend is one of the most basic calculations in actuarial science. So when I was recently asked to estimate trend for a line of business, I was not concerned. I resolved to solve it quickly, and then move on to a more "challenging" assignment.
The line of business was umbrella, so, by definition, all of the claims were excess of a retention, but this seemed a minor complication. The relationship between excess trend and ground-up trend seemed straightforward, especially if there was no reason to assume trend varies by layer. Moreover, the request was for an estimate of the ground-up trend; the adjustment to excess trend would be handled in the pricing tool.
Pulling together the raw data was a bit harder than I anticipated. I'm reminded of the old lame joke about the person who inquired about a room for the night and was told that one was available if they were willing to make their own bed. After answering in the affirmative, the person is handed a hammer and saw. I asked where to find the database with umbrella claims, and learned that creating the database was the first part of the assignment.
Once I had average claim data, I mechanically calculated the changes in average claim size over time, and quickly saw why the problem wasn't quite as simple as I had first envisioned. The numbers seemed to be all over the place, so I graphed the results and saw what looked like the output of a random number generator. After thinking it through, I realized I was essentially looking at a graph of noise, not signal.
Consider a set of claims in one particular year, all of which exceed some threshold, say, $100,000. Those exact claims cannot be viewed in the subsequent year. Let's pretend for a moment that we can take those exact same claims and observe them with one year's worth of trend. If trend is five percent, then every one of them will be exactly five percent higher than the year before. But that does not mean the average claim size will go up five percent. A handful of claims would have been just below the threshold in the first year, but would now trend to just above the threshold. These new entrants cluster just above the threshold and drag the average down, so the observed average excess claim grows by less than five percent. We could solve this by using a trended threshold, except that requires us to know the trend, the very factor we are trying to estimate.
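As an illustration (a hypothetical simulation, not the actual umbrella data; the lognormal parameters are invented for demonstration), the following sketch applies a five percent trend to the very same simulated claims and compares the average claim above a fixed $100,000 threshold before and after:

```python
import numpy as np

# Hypothetical ground-up severities; parameters are purely illustrative.
rng = np.random.default_rng(0)
claims = rng.lognormal(mean=10.0, sigma=1.5, size=500_000)

threshold = 100_000.0
trend = 1.05

# Average size of claims above the fixed threshold, before and after
# applying one year of trend to the *same* underlying claims.
avg_before = claims[claims > threshold].mean()
trended = claims * trend
avg_after = trended[trended > threshold].mean()

# New entrants cluster just above the threshold, so the observed
# average grows by less than the 5% severity trend.
ratio = avg_after / avg_before
print(f"observed change in average excess claim: {ratio - 1:.2%}")
```

With these assumed parameters, the observed change comes out well under five percent, even though every individual claim was trended by exactly five percent.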
The ironic solution is that severity trend can be measured by studying frequency. When thinking about excess trend, understanding the number of claims just below the excess threshold can be as important as measuring severity growth on the existing excess claims. Trend pushes claims from just below the threshold in one year to just above it in subsequent years. The expected increase in claim counts can be calculated easily if one is able to specify a severity distribution. The actuary has to take care to keep track of an exposure measure: obviously the counts in a year can increase simply because more business is written. Once an exposure adjustment is made, however, the interesting result is that severity trend can be estimated by looking at the change in the frequency of claims in excess of a threshold.
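To make the count calculation concrete, here is a small sketch, with invented lognormal parameters, of the expected increase in excess claim counts per unit of exposure when every claim is trended by five percent. The key observation is that a claim exceeds the threshold after trend exactly when it exceeds the de-trended threshold today:

```python
import math

def lognormal_sf(x, mu, sigma):
    # P(X > x) for a lognormal severity distribution.
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

mu, sigma = 10.0, 1.5      # illustrative severity parameters
threshold = 100_000.0
i = 0.05                   # assumed severity trend

# A claim of size X exceeds the threshold after trend iff
# (1 + i) * X > threshold, i.e. X > threshold / (1 + i).
p_now = lognormal_sf(threshold, mu, sigma)
p_next = lognormal_sf(threshold / (1 + i), mu, sigma)

# Expected change in excess claim frequency per unit of exposure.
freq_trend = p_next / p_now - 1
print(f"expected frequency trend: {freq_trend:.2%}")
```

Run in reverse, the same relationship lets the actuary back out the severity trend implied by an observed, exposure-adjusted change in excess frequency.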
Interestingly, if the severity distribution is exactly Pareto (of the single-parameter variety), the expected size of a claim above any fixed threshold is unchanged by trend. The same holds for any layer of fixed width above the threshold. Under these circumstances, any observed changes in severity don't simply look like noise; they are noise. It is reasonable to question whether a single-parameter Pareto applies to unlimited layers above a threshold, but most analyses will restrict the study to a modest range, whether because of data limitations or policy limit considerations. In either case, the variability of empirical severities is mostly noise. (Assuming another loss distribution, such as the lognormal, does little to solve the problem. One can calculate the implied expected changes in severity, but the results are not robust. That is, the expected changes are tiny for reasonable parameters, and will be swamped by noise.)
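The Pareto invariance can be checked directly. For a single-parameter Pareto, the survival function is S(x) = (theta/x)^q for x > theta, and trending every claim by a factor (1 + i) simply rescales theta. The conditional distribution above a fixed threshold is then unchanged, as this sketch (with invented parameters) confirms:

```python
# Single-parameter Pareto survival function: S(x) = (theta / x) ** q, x > theta.
def survival(x, theta, q):
    return (theta / x) ** q

theta, q = 50_000.0, 2.0       # illustrative parameters
trend = 1.05
threshold = 100_000.0

def conditional_survival(x, theta):
    # P(X > x | X > threshold); theta cancels, leaving (threshold / x) ** q.
    return survival(x, theta, q) / survival(threshold, theta, q)

# Trending every claim by 5% multiplies theta by 1.05, yet the
# conditional distribution above the fixed threshold is identical.
checks = [
    (conditional_survival(x, theta), conditional_survival(x, theta * trend))
    for x in (150_000.0, 250_000.0, 500_000.0)
]
```

Because theta cancels out of the conditional survival function, the entire distribution of claim sizes above the fixed threshold, and hence the expected claim in any fixed layer, is the same before and after trend.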
For those interested in a specific formula: in the case of a single-parameter Pareto with parameter q, an annual severity trend of i produces an annual frequency trend factor of (1+i)^q.
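A quick numerical check of that formula, again with invented parameters: expected excess counts are proportional to the survival probability at the threshold, and scaling theta by (1 + i) multiplies that probability by exactly (1 + i)^q.

```python
theta, q = 25_000.0, 1.8   # illustrative Pareto parameters
i = 0.05                   # severity trend
threshold = 100_000.0

def survival(x, theta, q):
    # Single-parameter Pareto: P(X > x) = (theta / x) ** q for x > theta.
    return (theta / x) ** q

# Expected excess counts are proportional to the survival probability,
# so their year-over-year ratio gives the frequency trend factor.
count_ratio = survival(threshold, theta * (1 + i), q) / survival(threshold, theta, q)
```

The algebra is immediate: ((1+i)theta/x)^q divided by (theta/x)^q leaves (1+i)^q, independent of the threshold.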