In this paper we describe a Bayesian model for excess-of-loss reinsurance pricing that has many advantages over existing methods. The model is currently used in production for multiple lines of business at one of the world’s largest reinsurers, and it treats frequency and severity separately.
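The frequency/severity split can be illustrated with a minimal Monte Carlo sketch. Everything below is an illustrative assumption, not the paper's model: Poisson claim counts, a Lomax-form Pareto severity, and a per-claim excess-of-loss layer.

```python
import math
import random
import statistics

def simulate_layer_cost(freq_mean, alpha, theta, attachment, limit,
                        n_years, seed=0):
    """Monte Carlo estimate of expected annual loss to an excess-of-loss
    layer, with frequency and severity modeled separately.

    Frequency: Poisson(freq_mean) ground-up claim counts.
    Severity:  Lomax/Pareto with survival S(x) = (theta/(theta+x))**alpha.
    The layer pays min(max(x - attachment, 0), limit) per claim.
    """
    rng = random.Random(seed)
    annual = []
    for _ in range(n_years):
        # Poisson draw by CDF inversion (adequate for small means)
        n, p = 0, math.exp(-freq_mean)
        cum, u = p, rng.random()
        while cum < u:
            n += 1
            p *= freq_mean / n
            cum += p
        cost = 0.0
        for _ in range(n):
            # inverse-CDF Pareto severity draw; 1 - random() lies in (0, 1]
            x = theta * ((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0)
            cost += min(max(x - attachment, 0.0), limit)
        annual.append(cost)
    return statistics.mean(annual)

# e.g. a 500k xs 500k layer with 3 expected ground-up claims per year
layer_cost = simulate_layer_cost(3.0, 2.0, 250_000.0,
                                 500_000.0, 500_000.0, n_years=5_000)
```

The same skeleton extends naturally to a Bayesian treatment by placing priors on `freq_mean`, `alpha`, and `theta` and averaging the layer cost over posterior draws.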
Generalized Linear Models (GLMs) have become an insurance industry standard for classification ratemaking. However, some of the technical language used to explain what a GLM is doing in its calculation can be obscure and intimidating to those not familiar with the tool. This paper describes the central concept of a GLM in terms of the estimating equations being solved, allowing the model to be interpreted as a set of weighted averages.
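A concrete instance of the weighted-average reading (a toy sketch with made-up data): for a Poisson GLM with log link and a single categorical rating factor, the estimating equations set the sum of (observed minus fitted) claims to zero within each level, which is solved exactly by making each level's fitted rate its exposure-weighted average observed rate, i.e. total claims over total exposure.

```python
from collections import defaultdict

# (rating level, exposure, claim count) - illustrative policy data
policies = [
    ("A", 1.0, 0), ("A", 2.0, 1), ("A", 0.5, 0),
    ("B", 1.5, 2), ("B", 1.0, 1),
]

claims = defaultdict(float)
exposure = defaultdict(float)
for level, w, y in policies:
    claims[level] += y
    exposure[level] += w

# Poisson score equation per level: sum(y_i) - rate * sum(w_i) = 0,
# so the MLE is the exposure-weighted average claim rate per level.
fitted_rate = {lvl: claims[lvl] / exposure[lvl] for lvl in claims}
# Level A: 1 claim / 3.5 exposure; Level B: 3 claims / 2.5 exposure
```

With several factors the levels interact and the equations must be solved iteratively, but each iteration still has this weighted-average structure.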
The CAS has made a number of statements about DE&I and systemic racism in insurance, including the series on race and insurance. This paper argues that the CAS papers do not make a compelling case and that the differentiation in pricing today is appropriate and reflects real differences in risk.
This article presents several actuarial applications of categorical embedding in the context of nonlife insurance risk classification. In nonlife insurance, many rating factors are naturally categorical and often the categorical variables have a large number of levels. The high cardinality of categorical rating variables presents challenges in the implementation of traditional actuarial methods.
This paper will examine the issue of social inflation trend in Medical Malpractice indemnity payments. I will demonstrate how a Calendar Year (CY) paid severity trend model can be used to identify and project social inflation. Using publicly available data, I will show how this line of business exhibits a cyclical stair step pattern which is the cornerstone of what I call the Level Shift model.
Method: We use foundational actuarial concepts to build a full stochastic model of casualty catastrophe (cat) risk. Results: Casualty cat modeling is achievable within the actuarial community. Conclusions: An actuarial approach can be used to build a forward-looking stochastic casualty cat model.
Two major regions devastated by climate change are Africa and Asia. However, little is known about the characteristics of the different compound climatic modes within these regions, which is key to managing climate risks. The joint behavior of mean rainfall and temperature in Nigeria, South Africa, Ethiopia, and India is therefore studied in this paper.
Many actuarial tasks, such as analysis of pure premiums by amount of insurance, require an analysis of data that is split among successive “buckets” along a line. Often there is also significant randomness in the data, which produces process-error volatility in the (usually average) values within the buckets; some smoothing of these values is therefore needed if they are to be truly useful.
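One simple smoother in this spirit (an illustrative sketch, not a method from the paper) is a weighted moving average across adjacent buckets, with each bucket's exposure serving as its credibility weight:

```python
def smooth_buckets(values, weights, half_window=1):
    """Smooth bucket averages with a weighted moving average.

    values[i]  : average value observed in bucket i
    weights[i] : credibility weight for bucket i (e.g. exposure count)
    Each output is the weighted mean of the buckets within
    half_window positions on either side.
    """
    n = len(values)
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        total_w = sum(weights[lo:hi])
        avg = sum(v * w for v, w in zip(values[lo:hi], weights[lo:hi])) / total_w
        smoothed.append(avg)
    return smoothed

# equal weights reduce to a plain moving average of neighboring buckets
print(smooth_buckets([1.0, 2.0, 3.0], [1, 1, 1]))  # → [1.5, 2.0, 2.5]
```

Widening `half_window` trades responsiveness for stability, the same trade-off that drives more formal smoothing methods.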
This paper analyzes the existing car insurance ratemaking system in Saudi Arabia, with the goal of assessing its readiness for novice women drivers, who have been allowed to drive only since June 2018. Saudi Arabia has an a posteriori ratemaking system that sets the premium for the next contract period based on the policyholder’s claim history.
Autonomous driving technology has made significant progress in the U.S. in recent years. Several companies have rolled out robotaxi and driverless delivery in many cities. Autonomous driving has created a unique and interesting challenge for actuaries to assess and estimate potential on-road liability exposure for insurance pricing and other purposes.
Machine learning applications in actuarial science are an increasingly popular subject. Notably, in the field of actuarial pricing, machine learning has been an avenue to higher predictive power in anticipating future claims. Insurers are now experimenting with these algorithms but are coming up against issues of model explainability and implementation costs.
Best-in-class actuarial departments not only provide quality actuarial work products; they are also able to communicate useful information to business leaders to enable better business decisions. It is essential to communicate so that business leaders understand the information conveyed and are able to act on it.
In this paper, the excess severity behavior of Pareto-Exponential and Pareto-Gamma mixtures is examined. Mathematical derivations are used to prove certain properties, while numerical integration is used to illustrate results.
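The central object here is the mean excess (excess severity) function e(d) = E[X − d | X > d], computable as the integral of the survival function above d divided by the survival at d. A crude numerical sketch for a Pareto-Exponential mixture follows; the Lomax parameterization and all parameter values are assumptions for illustration.

```python
import math

def survival(x, p, alpha, theta, lam):
    """Pareto(Lomax)-Exponential mixture survival function:
    S(x) = p*(theta/(theta+x))**alpha + (1-p)*exp(-lam*x)."""
    return p * (theta / (theta + x)) ** alpha + (1 - p) * math.exp(-lam * x)

def mean_excess(d, p, alpha, theta, lam, upper=1e4, steps=100_000):
    """e(d) = integral of S(x) over [d, upper] divided by S(d),
    via the trapezoid rule; upper must be large enough that the
    remaining tail is negligible."""
    h = (upper - d) / steps
    total = 0.5 * (survival(d, p, alpha, theta, lam)
                   + survival(upper, p, alpha, theta, lam))
    for i in range(1, steps):
        total += survival(d + i * h, p, alpha, theta, lam)
    return h * total / survival(d, p, alpha, theta, lam)

# sanity checks against closed forms of the pure components:
# exponential: e(d) = 1/lam;  Lomax (alpha > 1): e(d) = (theta + d)/(alpha - 1)
```

Plotting e(d) across d for intermediate mixing weights p shows how the heavy Pareto tail comes to dominate the excess severity at high deductibles.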
People tend to express strong dislike for default risk in their insurance coverage. Survey participants demand premium reductions of over 20% for a 1% risk of default.
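A one-line expected-value benchmark makes the size of this aversion concrete. The premium level below is an illustrative assumption, and the calculation assumes risk neutrality and that default means the policy pays nothing:

```python
full_premium = 1000.0   # illustrative premium for a default-free policy
q = 0.01                # probability the insurer defaults and pays nothing
fair_premium = full_premium * (1 - q)            # risk-neutral value: 990
fair_discount = 1 - fair_premium / full_premium  # = q = 1%
demanded_discount = 0.20                         # from the survey finding
multiple = demanded_discount / fair_discount     # about 20x the fair discount
```

Under these assumptions, the demanded 20% reduction is roughly twenty times the actuarially fair compensation for a 1% default probability.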
This project enhances the current understanding of cyber risks in the smart home ecosystem from the insurance industry's perspective. In particular, the quantitative framework and pricing strategies developed in this project can be immediately adopted/adapted by actuaries to price the cyber risks for smart homes, a fast-growing insurance market.
To model property/casualty insurance frequency for various lines of business, the Negative Binomial (NB) has long been the distribution of choice, despite evidence that this model often does not fit empirical data sufficiently well. Seeking a different distribution that tends to provide a better fit and is yet simple to use, we investigated the use of the Zipf-Mandelbrot (ZM) distribution for fitting insurance frequency.
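A minimal sketch of a truncated ZM claim-count pmf; the support starting at zero, the truncation point, and the parameter values are assumptions for illustration, not the paper's fitted model:

```python
def zipf_mandelbrot_pmf(s, q, n_max):
    """Truncated Zipf-Mandelbrot pmf on counts k = 0..n_max:
    f(k) proportional to 1 / (k + q)**s, normalized to sum to 1."""
    weights = [1.0 / (k + q) ** s for k in range(n_max + 1)]
    z = sum(weights)
    return [w / z for w in weights]

pmf = zipf_mandelbrot_pmf(s=2.5, q=1.5, n_max=1_000)
mean_count = sum(k * p for k, p in enumerate(pmf))  # implied claim frequency
```

Unlike the NB, the ZM pmf decays like a power law, so a maximum-likelihood grid search over (s, q) can capture heavier-tailed empirical count data.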
The number and location of knots strongly impact the fitted values obtained from spline regression methods. P-splines have been proposed to solve this problem by adding a smoothness penalty to the log-likelihood. This paper aims to demonstrate the strong potential of A-splines (adaptive splines), proposed by Goepp et al. (2018), for dealing with continuous risk features in insurance studies.
This paper takes a deep dive into historical loss reserves. Using Schedule P company filings, it is shown that reserves are very slow to react to emerging losses, much slower than the most accurate approach would dictate. There are other concerns besides accuracy, such as stability and avoiding deficient reserves. But attempting to explain the discrepancy in this manner alone would require a level of risk aversion that is unrealistic.
This paper aims to demonstrate how deep learning (a subset of machine learning) can be used to forecast the ultimate losses of a sample group of Property and Casualty insurance companies. The paper initially explores the concept of loss development - how losses incurred by an insurance company mature across time. These losses then reach a final amount, known as the ultimate loss.
The use of bootstrapping methods to evaluate the variance and range of future payments has become very popular in reserving. However, prior studies have shown that the ranges produced may be too narrow and do not always contain the actual outcomes when performing back-testing. This paper will look at some ways that the ranges determined by bootstrapping methods can be made more realistic.
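The flavor of the approach can be shown with a deliberately crude link-ratio resampling bootstrap on a toy paid triangle. The data are made up, and production bootstraps are typically built on the over-dispersed Poisson or Mack frameworks rather than this sketch:

```python
import random

# cumulative paid losses: rows = accident years, columns = development ages
triangle = [
    [100.0, 180.0, 220.0, 240.0],
    [110.0, 200.0, 250.0],
    [120.0, 210.0],
    [130.0],
]

def age_to_age(tri, j):
    """Observed link ratios from development age j to j+1."""
    return [row[j + 1] / row[j] for row in tri if len(row) > j + 1]

def simulated_reserve(tri, rng):
    """Project each open accident year to ultimate, sampling one
    observed link ratio (with replacement) per remaining step."""
    n_ages = len(tri[0])
    reserve = 0.0
    for row in tri:
        cum = row[-1]
        for j in range(len(row) - 1, n_ages - 1):
            cum *= rng.choice(age_to_age(tri, j))
        reserve += cum - row[-1]
    return reserve

rng = random.Random(0)
sims = sorted(simulated_reserve(triangle, rng) for _ in range(2_000))
low, high = sims[int(0.05 * len(sims))], sims[int(0.95 * len(sims))]  # 90% range
```

Ranges from sketches like this understate true uncertainty because they resample only process variation in the observed ratios; widening them to reflect parameter and model error is exactly the kind of adjustment the paper pursues.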
Current methods for evaluating risk transfer, such as the ‘10/10’ rule, suffer from a few problems: They are by nature ad hoc and thus are not a direct consequence of risk transfer; they do not properly evaluate some treaties with obvious risk transfer; and they may be gamed. This paper provides alternative methods for assessing risk transfer.
This paper advances the theory and methodology for quantifying reserve risk. It presents a formula for calculating the variance of unpaid losses that is based on analyzing volatility in a triangle of estimated ultimate losses. Instead of examining variability in paid or case incurred loss development, this approach focuses on the estimated ultimates.
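The raw ingredient of such an approach can be illustrated with a toy sketch (made-up numbers, not the paper's formula): tabulate how each accident year's estimated ultimate moves between successive evaluation dates, then measure the volatility of those movements.

```python
import statistics

# estimated ultimate losses: rows = accident years, columns = evaluations
ultimates = [
    [1000.0, 1040.0, 1030.0, 1035.0],
    [1100.0, 1080.0, 1090.0],
    [950.0, 1000.0],
]

# one-step revisions in each accident year's ultimate estimate
revisions = [row[j + 1] - row[j]
             for row in ultimates for j in range(len(row) - 1)]
revision_sd = statistics.stdev(revisions)  # volatility of ultimate estimates
```

Working with revisions to estimated ultimates, rather than with paid or case-incurred development, is what distinguishes this style of reserve-risk measurement.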