Portfolio Decomposition: Modeling Aggregate Loss (Ratio) Distributions

Abstract
Two factors may be responsible for differences between the expected loss amount (for a contract or an entire portfolio) and the actual loss amount that is experienced: errors in estimating the long-term average (parameter error) and random good or bad luck (process risk). This paper presents a method for using historical data to establish a model for process risk. Because the method does not require individual claim data, it is especially suitable for reinsurance companies, for whom individual claim data may not be available. It can also be used when data is obtained from the aggregate policy year and accident year calls that are filed with rating bureaus. Essentially, the method treats the experience of multiple contract years as if each year were a random sample drawn from a single population consisting of all the outcomes that could have occurred. The techniques of Time Series Decomposition are used to restate the historical data on an "as if at current levels" basis. Decomposition is then used to isolate the random fluctuations (process variance). A generalization of the Central Limit Theorem allows a model of these fluctuations to be constructed. While derived from aggregate portfolio experience, the model's divisibility property allows it to be scaled down to accurately reflect the aggregate loss distribution of an individual contract or policy.
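The Python sketch below illustrates the general idea described in the abstract, not the paper's actual procedure: historical loss ratios are restated to current levels by removing an estimated trend, the dispersion of the detrended ratios is taken as a measure of process risk, and an infinitely divisible family (a gamma, chosen here purely for illustration) is fitted by moment matching so the portfolio-level model can be scaled down to a contract that writes a fraction of the portfolio. The data, the linear trend, and the gamma assumption are all hypothetical stand-ins rather than Bender's decomposition or fitted model.

    import numpy as np
    from scipy import stats

    # Hypothetical portfolio loss ratios by contract year (illustrative data only).
    years = np.arange(1990, 2000)
    loss_ratios = np.array([0.62, 0.68, 0.71, 0.66, 0.74, 0.70, 0.78, 0.73, 0.80, 0.76])

    # Step 1: remove an estimated linear trend so every year is restated
    # "as if at current levels" (a stand-in for the paper's decomposition step).
    slope, intercept, *_ = stats.linregress(years, loss_ratios)
    trend = intercept + slope * years
    adjusted = loss_ratios - trend + trend[-1]   # shift each year to the latest trend level

    # Step 2: the spread of the adjusted ratios estimates process risk.
    mean_lr = adjusted.mean()
    var_lr = adjusted.var(ddof=1)

    # Step 3: fit an infinitely divisible family (gamma, as an illustrative choice)
    # to the adjusted loss ratios by matching the first two moments.
    alpha = mean_lr**2 / var_lr   # shape
    theta = var_lr / mean_lr      # scale

    # Step 4: divisibility lets the portfolio model be scaled to a contract that
    # represents a fraction p of the portfolio premium: its loss-ratio distribution
    # keeps the same mean but exhibits more process variance.
    p = 0.10
    contract_lr = stats.gamma(a=alpha * p, scale=theta / p)

    print(f"portfolio loss ratio: mean {mean_lr:.3f}, 95th pct "
          f"{stats.gamma(a=alpha, scale=theta).ppf(0.95):.3f}")
    print(f"10% contract loss ratio: mean {contract_lr.mean():.3f}, 95th pct "
          f"{contract_lr.ppf(0.95):.3f}")

The scaling in Step 4 follows from divisibility: splitting a gamma-distributed portfolio loss into a fraction p of the book divides the shape parameter by 1/p, so the smaller contract's loss ratio has the same expected value but a variance 1/p times larger, which is the behavior the abstract attributes to the scaled-down model.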
Volume
Summer
Page
105-176
Year
2000
Categories
Actuarial Applications and Methodologies > Ratemaking > Loss-Sensitive Features
Financial and Statistical Methods > Statistical Models and Methods > Regression
Financial and Statistical Methods > Statistical Models and Methods > Time Series
Financial and Statistical Methods > Aggregation Methods
Financial and Statistical Methods > Loss Distributions
Business Areas > Reinsurance
Publications
Casualty Actuarial Society E-Forum
Authors
Robert K. Bender