CAS E-Forum Archive

In this paper we will describe a Bayesian model for excess of loss reinsurance pricing that has many advantages over existing methods. The model is currently used in production for multiple lines of business at one of the world’s largest reinsurers. The model treats frequency and severity separately. In estimating ultimate frequency, the model analyzes nominal claim count data jointly against uncertain ultimate frequency and development pattern priors, allowing for more careful analysis of sparse claim count information and properly differentiating between triangulated and last-diagonal data. The severity model is pragmatic, yet accounts for severity distribution development and weighs the volume of data against prior distributions. The model is programmed in R and Stan, eliminating much of the algebra and calculus otherwise required and removing the need for conjugate prior distribution families. We compare this method with the more established Bühlmann-Straub credibility application to excess of loss pricing (for instance in Cockroft) and with the more complex model given by Mildenhall, showing numerous advantages of our method.
This paper examines the issue of social inflation trend in Medical Malpractice indemnity payments. I will demonstrate how a Calendar Year (CY) paid severity trend model can be used to identify and project social inflation. Using publicly available data, I will show how this line of business exhibits a cyclical stair-step pattern, which is the cornerstone of what I call the Level Shift model. I will then propose a method for calculating the severity trend factors for both claims-made and occurrence policies based on this Level Shift model. The generalized model can also be used to incorporate shifts in both frequency and severity trend patterns due to external “Key Events” that change the loss landscape for this line of business. I will discuss how the jurisdiction and legal environment in which a claim is made can affect loss cost trends and how the effect of “Key Events” resulting from changes in these environments can be projected using the Level Shift model. Finally, I will demonstrate how the Level Shift model can help shed light on the unexpected (adverse) loss development actuaries see in the Medical Malpractice line of business.
We use foundational actuarial concepts to build a full stochastic model of casualty catastrophe (cat) risk.
This paper analyzes the existing car insurance ratemaking system in Saudi Arabia, with the goal of assessing its readiness for novice women drivers, who were permitted to drive only four years ago (starting June 2018). Saudi Arabia has an a posteriori ratemaking system that sets the premium for the next contract period based on the policyholder’s claim history. Concretely, it rewards drivers with no claims, and therefore it is often referred to as a non-claim discount (NCD) system. We set out to analyze the system’s efficiency by transforming the current rules established by the Saudi Arabian Monetary Agency into a 6-class premium rate system. Since Malaysia and Brazil have ratemaking systems with a similar number of classes, we compare the efficiency of the three systems. By evaluating convergence to steady state, the average premium level charged, and elasticity, we conclude that the Saudi Arabian system is the slowest to reach a steady state and charges the highest average premium to new policyholders relative to drivers with a long history in the system, but is relatively elastic. All of this leads us to conclude that the current Saudi NCD system is financially imbalanced and does not achieve fairness (quickly) among insured drivers, an essential feature for the new generation of women drivers. Furthermore, based on claim data from Allied Cooperative Insurance Group, we conclude that there is still insufficient information to fully describe the driving pattern of these novice drivers. Hence, we consider some possible scenarios/behaviors for the new generation of women drivers, which confirm our conclusion.
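As a minimal, illustrative sketch of the steady-state calculation that such a comparison relies on (the 6-class transition rules, claim probability, and relativities below are placeholders, not the Saudi, Malaysian, or Brazilian rules), the stationary distribution of the class-transition Markov chain can be computed directly:

```python
import numpy as np

# Illustrative 6-class NCD system: move up one class after a claim-free year,
# fall back to class 0 after any claim.  All numbers are assumptions.
p_claim = 0.10
P = np.zeros((6, 6))
for i in range(6):
    P[i, min(i + 1, 5)] += 1 - p_claim   # claim-free year: move up (or stay at the top)
    P[i, 0] += p_claim                   # at least one claim: back to class 0

# The stationary distribution pi solves pi = pi P with the entries summing to 1.
A = np.vstack([P.T - np.eye(6), np.ones(6)])
b = np.append(np.zeros(6), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

relativities = np.array([1.00, 0.90, 0.80, 0.70, 0.65, 0.60])  # hypothetical premium relativities
print("steady-state class distribution:", np.round(pi, 3))
print("steady-state average premium relativity:", round(float(pi @ relativities), 3))
```

Convergence speed can then be compared across systems by iterating an entry-class distribution forward one year at a time and measuring its distance from the stationary distribution.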
Best-in-class actuarial departments not only provide quality actuarial work products but also communicate useful information to business leaders to enable better business decisions. It is essential to communicate so that business leaders understand the information conveyed and are able to act on it. Simply providing information is not sufficient; the best actuarial departments make sure that business leaders understand the message and take action based on it.
Many actuarial tasks, such as the analysis of pure premiums by amount of insurance, require an analysis of data that is split among successive “buckets” along a line. Often, there is also significant randomness in the data, resulting in process error volatility that affects the (usually average) values of the data within the buckets, so some smoothing of these values is needed if they are to be truly useful. The “ghost trend” approach allows for high-quality smoothing of those values and therefore helps to produce smoothed values that are more useful as relativity factors, loss distributions for pricing aggregate losses, and so on. An enhanced approach, integrating the ghost trend approach with other smoothing approaches, is also provided. That composite approach provides additional flexibility in dealing with large datasets and datasets that are greatly affected by random differences from point to point.
P&C insurers who provide coverages subject to premium audit are exposed to elevated uncertainty in the amount and timing of future revenue and cash flow. Audit premiums can vary significantly from expectations when the path of the economy is uncertain or rapidly changing.

A recent survey of several commercial insurers by the CAS Reserves Working Group indicates that companies use actuaries to help forecast audit premiums. Audit premiums influence the revenue of insurance companies through the change in the Earned but Unbilled (EBUB) or Earned but Not Reported (EBNR) premium reserve. They also affect measures of claim frequency. A shift in audit premiums can be meaningful to the financial statements of P&C insurers. Despite this, to the best of the author’s knowledge, there is no actuarial literature to date focusing on the topic.

This paper provides an overview of premium audit, presents the survey results, and highlights one approach for forecasting audit premiums. The 2020 COVID-19 recession is explored as a case study for the impact a rapidly changing economy can have on premium audits. In addition, two illustrations highlight the impact of premium audit forecasts on measures of claim frequency.
Generalized linear models (GLMs) have become an insurance industry standard for classification ratemaking. However, some of the technical language used in explaining what a GLM is doing in its calculation can be obscure and intimidating to those not familiar with the tool. This paper describes the central concept of the GLM in terms of the estimating equations being solved, allowing the model to be interpreted as a set of weighted averages. The inclusion of prior information (in the Bayesian sense) follows naturally.
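As a point of reference (standard GLM notation rather than the paper’s own), the estimating equations being solved are

$$\sum_{i} \frac{w_i\,(y_i - \mu_i)}{V(\mu_i)\,g'(\mu_i)}\,x_{ij} = 0, \qquad j = 1,\dots,p,$$

where $w_i$ are prior weights, $\mu_i = g^{-1}(x_i^\top \beta)$, $V$ is the variance function, and $g$ is the link. With a canonical link this reduces to $\sum_i w_i x_{ij}(y_i - \mu_i) = 0$, so that for a single categorical rating factor each level’s fitted mean is exactly the weighted average of its observations, which is the weighted-average interpretation referred to above.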
Machine learning applications in actuarial science are an increasingly popular subject. Notably, in the field of actuarial pricing, machine learning has been an avenue to higher predictive power for anticipating future claims. Insurers are now experimenting with these algorithms but are coming up against issues of model explainability and implementation costs.

The existing literature has begun to scratch the surface of this deep field of research. Fujita et al. (2020) published an experiment comparing the performance of predictive frequency models using AGLM, GLM, GAM, and GBM. König and Loser (2020) performed a similar exercise, except they compared performance of GLMs, neural networks, and XGBoost in predicting frequency. We sought to add to existing research by predicting pure premium instead of only frequency or severity, comparing four different approaches, and assessing and presenting many different quantitative and qualitative performance evaluation metrics.

This paper compares four different models for prediction of pure premium: generalized linear models (GLM), accurate GLM (AGLM), eXtreme gradient boosting (XGBoost), and neural networks. The research explores the quantitative and qualitative performance of the models on a test set of data as well as the pros and cons of each model in a P&C insurance pricing context.
This paper presents a new conceptual framework for measuring insurance profitability. The idea is to model the sequence of results that would be obtained by writing the business under consideration year after year. Each year, premium stays the same, but losses are randomly sampled from a fixed loss distribution. Central to the framework is a capital management structure. This governs the payment of profit-sharing dividends to shareholders and includes the selection of a floor and a ceiling that constrain the range of surplus for the company. The company pays shareholders any excess surplus above the ceiling. On the other hand, if surplus falls below the floor, the company is liquidated. If results are extremely bad, the company goes bankrupt. When there is a bankruptcy, the investors do not need to make up the shortfall. The sequence of random loss results leads to a sequence of flows, called equity flows, to and from the investors. The sequence of surplus values is a Markov chain. The duration of any sequence depends on the loss results and the capital management parameters. The profitability of any multi-year sequence is measured as the internal rate of return (IRR) on those equity flows. Many sequences are randomly simulated to produce a distribution of multi-year shareholder returns.

The paper uses this paradigm to provide a fresh perspective for assessing the potential benefit to shareholders from the purchase of reinsurance by the insurance company. Conventional single-year approaches are implicitly skeptical of reinsurance: it cedes money that could otherwise boost shareholder profit, and it reduces the value of the shareholder’s insolvency put option. The multi-year sequence perspective reveals conditions under which reinsurance can boost shareholder return and reduce volatility. The paper concludes with a sensitivity analysis of several capital management and reinsurance parameters and a qualitative discussion of how to optimize profitability from this long-term return perspective.
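A minimal sketch of the simulation framework described above, with purely illustrative parameters for the premium, loss distribution, floor, and ceiling (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_equity_flows(premium=100.0, start_surplus=150.0, floor=75.0,
                          ceiling=200.0, n_years=30,
                          loss_mean=85.0, loss_cv=0.35):
    """Simulate one surplus path and return the investors' equity flows."""
    sigma = np.sqrt(np.log(1.0 + loss_cv ** 2))
    mu = np.log(loss_mean) - 0.5 * sigma ** 2
    surplus = start_surplus
    flows = [-start_surplus]                 # initial capital contribution
    for _ in range(n_years):
        surplus += premium - rng.lognormal(mu, sigma)
        if surplus > ceiling:                # excess surplus paid out as a dividend
            flows.append(surplus - ceiling)
            surplus = ceiling
        elif surplus < floor:                # floor breached: liquidation
            flows.append(max(surplus, 0.0))  # investors keep any remaining surplus
            return flows
        else:
            flows.append(0.0)
    flows[-1] += surplus                     # return surplus at the horizon
    return flows

def irr(flows, lo=-0.999, hi=2.0, tol=1e-8):
    """Internal rate of return on the equity flows, by bisection on NPV."""
    npv = lambda r: sum(cf / (1.0 + r) ** t for t, cf in enumerate(flows))
    if npv(lo) * npv(hi) > 0:                # no sign change: essentially a total loss
        return -1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

returns = [irr(simulate_equity_flows()) for _ in range(2000)]
print(f"mean IRR {np.mean(returns):.3f}, 5th percentile {np.percentile(returns, 5):.3f}")
```

Reinsurance can then be layered in by modifying the simulated loss and premium, and the resulting IRR distributions compared, which is the comparison the paper develops.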
This article presents several actuarial applications of categorical embedding in the context of nonlife insurance risk classification. In nonlife insurance, many rating factors are naturally categorical, and often the categorical variables have a large number of levels. The high cardinality of categorical rating variables presents challenges in the implementation of traditional actuarial methods. Categorical embedding, proposed in the machine learning literature for handling categorical variables, has recently received attention in actuarial studies. The method is inspired by neural network language models for learning text data and maps a categorical variable into a real-valued representation in Euclidean space. Using a property insurance claims data set, we demonstrate the use of categorical embedding in three applications. The first shows how embeddings are used to construct rating classes and calculate rating relativities for a single insurance risk. The second concerns predictive modeling for multivariate insurance risks and emphasizes the effects of dependence on tail risks. The third focuses on pricing new products, where transfer learning is used to gather knowledge from existing products.
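A generic sketch of how such an embedding can be fitted, using a Keras-style network on a single hypothetical high-cardinality rating factor (the data, layer sizes, and names below are illustrative assumptions, not the paper’s architecture):

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: one high-cardinality factor (e.g., territory) and claim counts.
n_levels, n_obs, embed_dim = 500, 10_000, 4
rng = np.random.default_rng(1)
territory = rng.integers(0, n_levels, size=n_obs)
claims = rng.poisson(0.10, size=n_obs)

inputs = tf.keras.Input(shape=(1,))
x = tf.keras.layers.Embedding(input_dim=n_levels, output_dim=embed_dim)(inputs)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(1, activation="exponential")(x)  # positive Poisson mean
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=tf.keras.losses.Poisson())
model.fit(territory.reshape(-1, 1), claims, epochs=5, batch_size=256, verbose=0)

# The learned (n_levels x embed_dim) vectors can be clustered into rating
# classes or passed to a downstream model as continuous features.
vectors = model.layers[1].get_weights()[0]
print(vectors.shape)
```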
In this paper, the excess severity behavior of Pareto-Exponential and Pareto-Gamma mixtures is examined. Mathematical derivations are used to prove certain properties, while numerical integral computations are used to illustrate results.

The excess severity function of the Pareto-Exponential mixture is shown to be increasing and concave, while for the Pareto-Gamma mixture, the excess severity function is shown to be increasing but can be either convex or concave.
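For reference (standard definitions rather than the paper’s derivations), the excess severity function studied here is the mean excess function

$$e(x) = \mathbb{E}\left[X - x \mid X > x\right] = \frac{\int_x^{\infty} S(t)\,dt}{S(x)},$$

where $S$ is the survival function. For a Pareto (Lomax) severity with shape $\alpha > 1$ and scale $\theta$, $e(x) = (x + \theta)/(\alpha - 1)$ is linear and increasing, while for an exponential severity it is constant, which gives some intuition for the shapes the mixtures can produce.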
Autonomous driving technology has made significant progress in the U.S. in recent years. Several companies have rolled out robotaxi and driverless delivery services in many cities. Autonomous driving has created a unique and interesting challenge for actuaries: assessing and estimating potential on-road liability exposure for insurance pricing and other purposes. Limited experience, a lack of consistent regulations among states, and evolving technology are among the issues actuaries have to deal with in assessing on-road liability exposures. This paper provides an overview of on-road liability exposure for Autonomous Vehicle operations on public roads and a framework to assist actuaries in their efforts to quantify potential liability losses in different markets.
People tend to express strong dislike for default risk in their insurance coverage. Survey participants demand premium reductions of over 20% for a 1% risk of default.

In this paper we suggest that a similar dynamic is also evident in insurers’ reinsurance purchases. Insurers are willing to pay significantly more for catastrophe (Cat) bond protection, where the risk of default is minimal, than for traditional reinsurance coverage. The risk of default in traditional reinsurance coverage is generally quite small (i.e., much less than 1%), but it is still sufficient to make Cat bonds appealing due to the reduction in default risk. This behavior is explained by the weighting function of prospect theory.
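For context, one widely used form of the prospect-theory probability weighting function (Tversky and Kahneman, 1992) is

$$w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1 - p)^{\gamma}\right)^{1/\gamma}}, \qquad 0 < \gamma < 1,$$

under which small probabilities, such as a default probability well below 1%, are overweighted relative to their objective values, consistent with the purchasing behavior described above.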
The CAS has made a number of statements about DE&I and systemic racism in insurance, including the series on race and insurance. This paper argues that the CAS papers do not make a compelling case and that the differentiation in pricing today is appropriate and reflects real differences in risk.
Two major regions devastated by climate change are Africa and Asia. However, little is known about the characteristics of the different compound climatic modes within these regions, which is key to managing climate risks. The joint behavior of mean rainfall and temperature in Nigeria, South Africa, Ethiopia, and India is therefore studied in this paper. The main objective is to understand the dynamics in the tails of the joint normal conditions (using histograms) and to zoom into different compound climatic zones, namely dry-hot, extreme dry-cold, and wet-warm zones, which are extracted based on selected quantile pairs for rainfall and temperature. Particular attention is given to the climatic signals of the strong dry-hot divisions, a core characteristic of droughts and heatwaves. The signals in this zone are modeled using simple harmonic motion (sinusoidal model) and white noise techniques in order to develop a locally adapted bivariate rain-temperature model that can be incorporated into the actuarial climate risk framework. Three variants of the sinusoidal model are applied on the basis of the following noise types: non-adjusted Gaussian noise, mean-variance adjusted Gaussian noise, and skew-normal noise. The quantile estimates indicate close fits in differing regions when compared to the empirical model, suggesting the need for a mixture model based on the sinusoidal model variants.
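In generic form (the paper’s exact parameterization may differ), a sinusoidal-plus-noise signal model of the kind described can be written as

$$Y_t = \mu + A \sin\!\left(\frac{2\pi t}{T} + \varphi\right) + \varepsilon_t,$$

with amplitude $A$, period $T$, and phase $\varphi$, and with the noise term $\varepsilon_t$ drawn from a non-adjusted Gaussian, a mean-variance adjusted Gaussian, or a skew-normal distribution, corresponding to the three variants listed.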
This paper takes a deep dive into historical loss reserves. Using Schedule P company filings, it is shown that reserves are very slow to react to emerging losses, much slower than the most accurate approach would dictate. There are other concerns besides accuracy, such as stability and avoiding deficient reserves, but attempting to explain the discrepancy in this manner alone would require an unrealistic level of risk aversion. Instead, it is shown that the corporate environment causes increased conservatism, which, when coupled with narrow framing, is able to explain company practice.
The number and location of knots strongly impact the fitted values obtained from spline regression methods. P-splines have been proposed to solve this problem by adding a smoothness penalty to the log-likelihood. This paper aims to demonstrate the strong potential of A-splines (adaptive splines), proposed by Goepp et al. (2018), for dealing with continuous risk features in insurance studies. Adaptive ridge is used to remove unnecessary knots from a large number of candidate knots, yielding a sparse model with high interpretability. Two applications are proposed to illustrate the performance of A-splines. First, death probabilities are graduated in a Binomial regression model. Second, continuous risk factors are included in a Poisson regression model for claim counts in motor insurance. We demonstrate that the move from a technical to a commercial price list can easily be achieved by using A-splines of degree 0, i.e., piecewise constant functions.
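As background, in standard P-spline notation (not the paper’s own), the fit maximizes a penalized log-likelihood of the form

$$\ell(\beta) - \frac{\lambda}{2} \sum_{j} \left(\Delta^{d} \beta_{j}\right)^{2},$$

where $\Delta^{d}$ is the $d$-th order difference operator on adjacent spline coefficients. Roughly speaking, the adaptive ridge behind A-splines replaces the single penalty $\lambda$ with iteratively reweighted per-coefficient terms $\lambda\,w_j \left(\Delta^{d}\beta_j\right)^2$ so that the penalty approximates a selection ($L_0$-type) penalty, and candidate knots whose weighted differences are driven to zero are removed.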
The Tweedie distribution provides a variance structure that is widely used in GLMs for pure premium ratemaking. This essay suggests the quasi-negative binomial (QNB) as an alternative. Both can be interpreted as collective risk models, but the QNB has a variance structure that is more commonly used in other actuarial applications.
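For context, the two variance structures being compared are of the following standard forms (the essay’s exact QNB parameterization may differ):

$$\operatorname{Var}(Y) = \phi\,\mu^{p}, \quad 1 < p < 2 \quad \text{(Tweedie, compound Poisson-gamma)}, \qquad \operatorname{Var}(Y) = \phi\left(\mu + \frac{\mu^{2}}{k}\right) \quad \text{(negative binomial type)}.$$

The quadratic-in-the-mean form on the right is the structure more commonly seen in other actuarial applications, which is presumably what the quasi-negative binomial carries over to pure premium modeling.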
The Machine Learning Working Party of the CAS identified the sparsity of published research in an insurance context as one barrier to entry for actuaries interested in machine learning (ML). The purpose of this paper is to provide references and descriptions of current research to act as a guide for actuaries interested in learning more about this field and for actuaries interested in advancing research in machine learning.
The use of bootstrapping methods to evaluate the variance and range of future payments has become very popular in reserving. However, prior studies have shown that the ranges produced may be too narrow and do not always contain the actual outcomes when back-testing is performed. This paper looks at some ways that the ranges determined by bootstrapping methods can be made more realistic. A central idea is relaxing the independence assumption and allowing correlation of the random draws in the bootstrap resampling. Using a publicly available dataset from Schedule P, we show that this can improve the back-testing results.
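A minimal sketch of one way to introduce such correlation, using a Gaussian copula over the residual draws (the residuals, cell count, and correlation level are placeholders, not the paper’s method or data):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Placeholder standardized residuals, e.g., from an ODP-style bootstrap.
resid = rng.normal(size=200)
n_draws, n_cells, rho = 5_000, 45, 0.30      # number of future cells is illustrative

# Equicorrelated normal scores -> correlated uniforms -> residuals sampled by rank.
cov = np.full((n_cells, n_cells), rho) + (1.0 - rho) * np.eye(n_cells)
z = rng.multivariate_normal(np.zeros(n_cells), cov, size=n_draws)
u = norm.cdf(z)
sorted_resid = np.sort(resid)
idx = np.minimum((u * len(resid)).astype(int), len(resid) - 1)
sampled = sorted_resid[idx]                  # correlated residual draws, shape (n_draws, n_cells)

# With rho = 0 this reduces to ordinary independent resampling; rho > 0
# widens the simulated reserve distribution.
print(sampled.shape, round(float(np.corrcoef(sampled[:, 0], sampled[:, 1])[0, 1]), 2))
```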
To model property/casualty insurance frequency for various lines of business, the Negative Binomial (NB) has long been the distribution of choice, despite evidence that this model often does not fit empirical data sufficiently well. Seeking a different distribution that tends to provide a better fit and is yet simple to use, we investigated the Zipf-Mandelbrot (ZM) distribution for fitting insurance frequency. We found, for the various lines of business and sub-groupings of data used in this research (based on increased limit factor tables published by Insurance Services Office), that the Zipf-Mandelbrot distribution regularly gave a better (often drastically better) fit to the data. The relativity-based nature of the Zipf-Mandelbrot (a Pareto-based power law) is discussed, and several potential pros and cons of using this little-known distribution are commented on.
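For reference, the Zipf-Mandelbrot probability mass function over $N$ ranks takes the form

$$\Pr(K = k) = \frac{(k + q)^{-s}}{\sum_{j=1}^{N} (j + q)^{-s}}, \qquad k = 1, \dots, N,$$

with shift parameter $q \ge 0$ and power-law exponent $s > 0$; setting $q = 0$ recovers the finite Zipf distribution.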
This paper advances the theory and methodology for quantifying reserve risk. It presents a formula for calculating the variance of unpaid losses that is based on analyzing volatility in a triangle of estimated ultimate losses. Instead of examining variability in paid or case incurred loss development, this approach focuses on the estimated ultimates. This builds on previous work by Rehman and Klugman (2010), Rehman (2016), and Seigenthaler (2019). It provides an estimate of one-year reserve risk that extends the total run-off reserve estimate presented in Rehman and Klugman. This paper addresses problems that can arise when the variance-covariance matrix in the Rehman and Klugman formula is computed from a triangle without considering that the vectors for different development ages have different sizes. These problems can give rise to unstable and anomalous results. Finally, this paper provides an estimate of parameter error. Although the methods in this paper do not capture all elements of reserve risk, they do provide a practical way to quantify the risk that is manifest in the volatility of the triangle of estimated ultimate losses.
Current methods for evaluating risk transfer, such as the ‘10/10’ rule, suffer from a few problems: they are by nature ad hoc and thus are not a direct consequence of risk transfer; they do not properly evaluate some treaties with obvious risk transfer; and they may be gamed. This paper provides alternative methods for assessing risk transfer. The primary technique is to require that the purchase of reinsurance reduce the coefficient of variation of the reinsured’s net retained losses. In many situations, a second requirement is suggested: that the reinsurer’s net profit and expenses be less than the cost (in interest) of obtaining enough additional capital to replace the proposed reinsurance.
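A minimal sketch of the coefficient-of-variation test on a simple illustrative excess-of-loss treaty (the loss model, attachment, and limit are assumptions for the example; the second requirement on reinsurer profit versus the cost of replacement capital is not shown):

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims = 100_000

# Illustrative subject losses and excess-of-loss terms.
gross = rng.lognormal(mean=2.0, sigma=1.0, size=n_sims)
attachment, limit = 15.0, 35.0

ceded = np.clip(gross - attachment, 0.0, limit)
net = gross - ceded                          # net retained losses

cv = lambda x: x.std() / x.mean()
print(f"CV gross: {cv(gross):.3f}   CV net: {cv(net):.3f}")
print("CV reduced, risk transfer indicated" if cv(net) < cv(gross)
      else "no reduction in CV")
```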