Actuarial Review

Quarterly Review
A Review of: Loss Models: From Data to Decisions
by Stuart Klugman, Harry Panjer, and Gordon Willmot (John Wiley & Sons, 1998)

by Glenn G. Meyers

In the late seventies, Bob Hogg was serving as an academic advisor to the committee that prepares Parts 1 and 2 of the actuarial examinations. As was typical of Bob, he took an active interest in possible applications of statistics, and out of his interactions with the committee he began writing a book called Loss Distributions. Back at the University of Iowa, he enlisted the aid of Stuart Klugman, a Fellow of the Society of Actuaries teaching actuarial science, and together they completed the book, which has been studied by many students in the Casualty Actuarial Society.

Loss Distributions was only the beginning of Stuart's contributions to actuarial science. He has also made significant contributions to credibility theory. Although not a member of our Society, he has been invited to speak at many CAS meetings and seminars. In addition, he is serving as an academic advisor to the CAS Committee on the Theory of Risk.

Harry Panjer and Gordon Willmot, professors at the University of Waterloo, have played similar roles in SOA affairs. Harry is well known among research-oriented casualty actuaries for his work on the theory of risk.

Together, Stuart, Harry, and Gordon have written a book entitled Loss Models: From Data to Decisions, which covers loss distributions, credibility theory, and risk theory in a coherent manner. Because of their long involvement in the affairs of both the CAS and the SOA, they are well qualified to write such a book for an actuarial audience. Following modern marketing principles, they recruited the CAS Committee on the Theory of Risk as a focus group. The committee provided the authors with several real-world examples including a case study that is followed throughout the book. The book will be published in January 1998.

Following the introduction in the first chapter, Chapter 2 deals with claim severity distributions. This chapter provides a fairly complete inventory of claim severity models and gives a variety of methods for fitting them to data. In addition, it provides methods of quantifying the uncertainty inherent in fitting these models to limited amounts of data. The chapter also shows how to apply the models in analyzing the effects of limits and deductibles.
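
To give a flavor of the fitting step, here is a minimal sketch of a maximum likelihood fit followed by a limited-mean calculation. The lognormal model, the simulated "claim data," and the policy limit are my own illustrative assumptions, not an example taken from the book.

```python
# A minimal sketch of fitting a severity model by maximum likelihood.
# The lognormal model and the simulated claim data are illustrative
# assumptions, not an example from the book.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
losses = rng.lognormal(mean=8.0, sigma=1.5, size=500)   # stand-in claim severities

# Maximum likelihood fit, with the location parameter pinned at zero.
sigma_hat, _, scale_hat = stats.lognorm.fit(losses, floc=0)
print(f"fitted mu = {np.log(scale_hat):.3f}, fitted sigma = {sigma_hat:.3f}")

# One application the chapter covers: the effect of a policy limit,
# summarized here by the empirical limited mean E[min(X, u)].
u = 25_000
print(f"limited mean at u = {u:,}: {np.minimum(losses, u).mean():,.0f}")
```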

Chapter 3's focus on claim frequency distributions starts with the classic three distributions widely known to most actuaries: the Poisson, binomial, and negative binomial. It then goes on to introduce a whole new class of frequency distributions called the compound distributions. One way to think of these distributions is to consider a two-stage process where one picks a random number of "accidents" from one distribution and for each "accident" one picks a random number of "claims" from another distribution. The compound distribution describes the total number of "claims" generated by this process. The chapter then describes a recursive algorithm for calculating the probability of any given number of claims. It turns out that many of these distributions can also be described as mixtures.
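
A small simulation makes the two-stage picture concrete. The Poisson-on-Poisson choice below (a Neyman Type A distribution) and the parameter values are my own illustrative assumptions:

```python
# A minimal sketch of the two-stage "accidents, then claims per accident"
# process. Both stages are Poisson here (giving a Neyman Type A
# distribution); the parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)

def compound_claim_counts(n_sims, accidents_mean=2.0, claims_per_accident_mean=3.0):
    accidents = rng.poisson(accidents_mean, size=n_sims)   # stage 1: number of accidents
    # stage 2: an independent Poisson claim count for each accident, summed
    return np.array([rng.poisson(claims_per_accident_mean, size=a).sum()
                     for a in accidents])

counts = compound_claim_counts(100_000)
# The variance exceeds the mean, an effect a single Poisson cannot capture.
print(f"mean = {counts.mean():.2f}, variance = {counts.var():.2f}")
```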

In time, these compound distributions could become a significant addition to the modeling tools available to the actuarial profession.

Chapter 4 describes three main ways of calculating aggregate loss distributions in terms of the underlying frequency and severity distributions. Computer simulation is the easiest way to calculate aggregate probabilities. Its drawback is that it can take a great deal of computer time. This is becoming less of a problem as computers get faster, but actuaries are also getting more ambitious. As we move into dynamic financial analysis (DFA), simulators could well be asking for the services of Deep Blue.
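
For readers who have not seen it done, the simulation approach fits in a few lines. The Poisson frequency, lognormal severity, and parameter values below are invented for the example:

```python
# A hedged sketch of the simulation approach to the aggregate loss
# distribution: Poisson claim counts and lognormal severities, with all
# parameter values invented for the example.
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_aggregate(n_sims, freq_mean=100, sev_mu=8.0, sev_sigma=1.5):
    counts = rng.poisson(freq_mean, size=n_sims)            # claim count per scenario
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum()
                     for n in counts])                      # total losses per scenario

agg = simulate_aggregate(10_000)
print(f"mean = {agg.mean():,.0f}, 99th percentile = {np.percentile(agg, 99):,.0f}")
```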

The Panjer recursive algorithm is a very elegant way to calculate aggregate loss probabilities, and it is very fast when the expected number of claims is small. One major drawback is that it does not handle multiple lines of insurance. The multiple-line problem can be solved either by brute-force convolutions or by the mathematically complex (in other words, magic) procedure known as Fourier inversion.
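
To show how little machinery the recursion needs, here is a minimal sketch for a Poisson frequency and a severity already discretized on the positive integers; the severity probabilities are invented for illustration:

```python
# A minimal sketch of the Panjer recursion for a Poisson frequency and a
# severity distribution already discretized on 1, 2, ...; the severity
# probabilities are invented for illustration.
import numpy as np

def panjer_poisson(lam, severity_probs, max_s):
    """Return aggregate probabilities g[0..max_s] using
    g(s) = (lam / s) * sum_{j=1..s} j * f(j) * g(s - j)."""
    f = np.zeros(max_s + 1)
    f[1:len(severity_probs) + 1] = severity_probs   # f[j] = P(claim size = j)
    g = np.zeros(max_s + 1)
    g[0] = np.exp(-lam)                             # no claims: aggregate loss of zero
    for s in range(1, max_s + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

g = panjer_poisson(lam=3.0, severity_probs=[0.5, 0.3, 0.2], max_s=30)
print(f"probability mass captured through 30: {g.sum():.6f}")
```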

The Panjer recursive algorithm can handle compound frequency distributions, but it requires more computer time. Following a formula given in the book, I was able to use compound frequency distributions in a Fourier inversion method with a minimal increase in computer time.
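
The Fourier route is less magical in code than on paper. The sketch below shows the standard composition of the frequency probability generating function with the transformed severity; the Poisson choice and the severity probabilities are illustrative assumptions, and this is not the Heckman/Meyers algorithm itself.

```python
# A sketch of the Fourier inversion idea (not the Heckman/Meyers algorithm
# itself): transform the discretized severity, push it through the frequency
# probability generating function, and invert. Parameters are illustrative.
import numpy as np

def fft_aggregate(lam, severity_probs, n=2**12):
    f = np.zeros(n)
    f[1:len(severity_probs) + 1] = severity_probs
    f_hat = np.fft.fft(f)                    # transformed severity on the grid
    g_hat = np.exp(lam * (f_hat - 1.0))      # Poisson PGF evaluated at f_hat
    return np.fft.ifft(g_hat).real           # aggregate probabilities

g = fft_aggregate(lam=3.0, severity_probs=[0.5, 0.3, 0.2])
print(f"P(S = 0) = {g[0]:.6f} vs exp(-3) = {np.exp(-3):.6f}")
```

A compound frequency amounts to composing one more generating function into the line that builds g_hat, which squares with the minimal increase in computer time noted above.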

Chapter 5's coverage of credibility theory includes classical credibility, Bayesian estimation, Bühlmann credibility, and empirical Bayesian credibility. The earlier sections provide a mathematically rigorous treatment of the material in our current Part 4B exam. I hope that at least some readers master the empirical Bayesian credibility material, which I feel is underutilized.
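
As a taste of that underutilized material, here is a minimal sketch of the nonparametric empirical Bayes (Bühlmann) estimates; the loss experience below is invented for illustration:

```python
# A minimal sketch of nonparametric empirical Bayes (Buhlmann) estimation;
# the loss experience below is invented for illustration.
import numpy as np

# Rows are risks, columns are years of loss experience.
x = np.array([[1.0, 2.0, 1.5, 2.5],
              [3.0, 2.5, 3.5, 3.0],
              [0.5, 1.0, 0.5, 1.0]])
r, n = x.shape
risk_means = x.mean(axis=1)
grand_mean = risk_means.mean()

epv = x.var(axis=1, ddof=1).mean()              # expected process variance
vhm = risk_means.var(ddof=1) - epv / n          # variance of hypothetical means
z = n / (n + epv / vhm)                         # Buhlmann credibility factor
estimates = z * risk_means + (1 - z) * grand_mean
print(f"Z = {z:.3f}; credibility estimates: {np.round(estimates, 3)}")
```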

Chapter 6, which covers conventional ruin theory, takes simple models of loss generation and premium collection and attempts to solve the mathematically difficult problem of calculating the probability that an insurer will exhaust its surplus. This subject has held the attention of risk theorists for several decades and probably should be included in any text on risk theory. But I suspect that much of this will eventually be replaced by DFA.

My favorite part of the book is the collection of appendices, which will be very valuable to those who are charged with implementing this material. Here we have a reference for all the distributions discussed in the text, along with formulas for various quantities of interest such as density functions, moments, limited moments, probability generating functions, and the like. In addition, the notation is standardized; in other words, the same letters are used for the scale parameter, the shape parameter, and so on. The appendices also have a bunch of goodies, such as a formula for the incomplete gamma function, the simplex algorithm for maximizing functions, and formulas for adjusting the frequency distributions when the severity distribution is affected by a deductible.

Wiley will also distribute software related to the book through its Web site. The programs included will be: 1) FIT, for fitting severity distributions; 2) DFIT, for fitting frequency distributions; 3) CR, for calculating aggregate probabilities using the Panjer recursive algorithm; and 4) a shareware version of CRIMCALC, for calculating aggregate probabilities by the Heckman/Meyers algorithm, a Fourier inversion method.

I believe this book will become a major text and reference for actuaries. All actuaries will benefit by mastering some of this material, and a large employer of actuaries should have someone on board who has mastered all of this material.