Browse Research

The number and location of knots strongly impact the fitted values obtained from spline regression methods. P-splines have been proposed to solve this problem by adding a smoothness penalty to the log-likelihood. This paper aims to demonstrate the strong potential of A-splines (adaptive splines), proposed by Goepp et al. (2018), for dealing with continuous risk features in insurance studies.
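For orientation, the penalty idea behind P-splines is usually written as a difference-penalized log-likelihood (a standard formulation, not taken from this paper):

\[ \ell_\lambda(\beta) \;=\; \ell(\beta) \;-\; \frac{\lambda}{2}\,\beta^{\top} D_k^{\top} D_k\,\beta, \]

where \(\beta\) are the B-spline coefficients, \(D_k\) is the k-th order difference matrix, and \(\lambda\) controls smoothness. Adaptive approaches along the lines of A-splines reweight the individual difference penalties iteratively so that superfluous knots are effectively removed, rather than relying on a single global \(\lambda\).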
This paper takes a deep dive into historical loss reserves. Using Schedule P company filings, it is shown that reserves are very slow to react to emerging losses, much slower than the most accurate approach would dictate. There are other concerns besides accuracy, such as stability and avoiding deficient reserves. But attempting to explain the discrepancy in this manner alone would require a level of risk aversion that is unrealistic.
This paper aims to demonstrate how deep learning (a subset of machine learning) can be used to forecast the ultimate losses of a sample group of Property and Casualty insurance companies. The paper initially explores the concept of loss development - how losses incurred by an insurance company mature across time. These losses then reach a final amount, known as the ultimate loss.
The use of bootstrapping methods to evaluate the variance and range of future payments has become very popular in reserving. However, prior studies have shown that the ranges produced may be too narrow and do not always contain the actual outcomes when performing back-testing. This paper will look at some ways that the ranges determined by bootstrapping methods can be made more realistic.
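As a minimal sketch of the general idea (resampling observed link ratios only; this is not the ODP residual bootstrap typically used in practice, nor the paper's method, and the triangle is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cumulative paid triangle (rows = accident years, cols = development ages).
tri = np.array([
    [100., 180., 220., 240.],
    [110., 200., 250., np.nan],
    [120., 210., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# Observed age-to-age (link) ratios for each development step.
links = [tri[:n - j - 1, j + 1] / tri[:n - j - 1, j] for j in range(n - 1)]

def simulate_ultimates():
    """One pseudo-scenario: complete the triangle by drawing link ratios with replacement."""
    ult = tri.copy()
    for i in range(n):
        for j in range(n - 1):
            if np.isnan(ult[i, j + 1]):
                ult[i, j + 1] = ult[i, j] * rng.choice(links[j])
    return ult[:, -1]

sims = np.array([simulate_ultimates() for _ in range(5000)])
latest = np.array([tri[i, n - 1 - i] for i in range(n)])    # latest diagonal
reserve_sims = sims.sum(axis=1) - latest.sum()
print(np.percentile(reserve_sims, [5, 50, 95]))
```

The spread of reserve_sims gives a crude range; the paper's concern is that ranges produced this way tend to be too narrow when back-tested against actual outcomes.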
Current methods for evaluating risk transfer, such as the ‘10/10’ rule, suffer from a few problems: They are by nature ad hoc and thus are not a direct consequence of risk transfer; they do not properly evaluate some treaties with obvious risk transfer; and they may be gamed. This paper provides alternative methods for assessing risk transfer.
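For readers unfamiliar with the '10/10' benchmark, the following sketch tests whether a hypothetical treaty shows at least a 10% probability of at least a 10% underwriting loss; the loss model and figures are assumptions for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical treaty: premium and simulated ceded losses (assumed severity model).
premium = 1_000_000.0
ceded_losses = rng.lognormal(mean=13.0, sigma=1.0, size=100_000)

# Underwriting result as a fraction of premium (ignoring expenses and discounting).
uw_margin = (premium - ceded_losses) / premium

# '10/10' test: at least a 10% probability of at least a 10% loss of premium.
prob_10pct_loss = np.mean(uw_margin <= -0.10)
print(f"P(loss >= 10% of premium) = {prob_10pct_loss:.3f}, "
      f"passes 10/10: {prob_10pct_loss >= 0.10}")
```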
This paper advances the theory and methodology for quantifying reserve risk. It presents a formula for calculating the variance of unpaid losses that is based on analyzing volatility in a triangle of estimated ultimate losses. Instead of examining variability in paid or case incurred loss development, this approach focuses on the estimated ultimates.
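As a toy illustration of the underlying idea only (not the paper's formula), one can measure how estimated ultimates are revised between successive evaluations and treat the volatility of those revisions as a proxy for reserve risk:

```python
import numpy as np

# Hypothetical triangle of estimated ultimates (rows = accident years,
# cols = successive evaluation dates); NaN where not yet evaluated.
ult = np.array([
    [500., 520., 515., 512.],
    [480., 470., 478., np.nan],
    [510., 530., np.nan, np.nan],
])

# Evaluation-to-evaluation revisions of the ultimate estimates.
ratios = ult[:, 1:] / ult[:, :-1]
log_changes = np.log(ratios[~np.isnan(ratios)])

# Volatility of revisions as a crude proxy for reserve risk.
print("std of log revisions:", log_changes.std(ddof=1))
```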
In finance, the Laplace transform is used to calculate the distribution of a stochastic present value. There are several practical impediments to the use of the Laplace transform in actuarial science: we lack a physical interpretation of the transform, it requires a change in perspective to a frame of reference that we seldom use, and it involves complex arithmetic.
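A standard connection between the two concepts, stated here for orientation rather than quoted from the essay: for a payment of 1 made at random time T under a constant force of interest \(\delta\), the expected present value is the Laplace transform of T evaluated at \(\delta\),

\[ \mathcal{L}_T(s) = \mathbb{E}\!\left[e^{-sT}\right], \qquad \mathbb{E}\!\left[e^{-\delta T}\right] = \mathcal{L}_T(\delta). \]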
The Machine Learning Working Party of the CAS identified one barrier to entry for actuaries interested in machine learning (ML) as being the fact that published research in an insurance context is sparse. The purpose of this paper is to provide references and descriptions of current research to act as a guide for actuaries interested in learning more about this field and for actuaries interested in advancing research in machine learning.
The Tweedie distribution provides a variance structure that is widely used in GLMs for pure premium ratemaking. This essay suggests the quasi-negative binomial (QNB) as an alternative. Both can be interpreted as collective risk models, but the QNB has a variance structure that is more commonly used in other actuarial applications.
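For comparison, the Tweedie variance function used in pure premium GLMs is

\[ \mathrm{Var}(Y) = \phi\,\mu^{p}, \qquad 1 < p < 2, \]

for the compound Poisson-gamma case, while negative-binomial-style models carry a variance of the form

\[ \mathrm{Var}(Y) = \mu + \frac{\mu^{2}}{k}. \]

These are standard forms given for orientation; the exact parameterization of the QNB in the essay may differ.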
After the chain ladder and the Bornhuetter-Ferguson method, the Cape Cod reserving method is among the most popular methods used to project non-life paid or incurred triangles. For this method, Saluz (2015) developed a stochastic model allowing estimation of the prediction error resulting from such projections. This stochastic model involves a parameterization of the Cape Cod method based on incremental triangles of incurred or paid losses.
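For orientation, the deterministic Cape Cod estimator is commonly written as

\[ \widehat{ELR} = \frac{\sum_i C_i}{\sum_i P_i\,\pi_i}, \qquad \widehat{U}_i = C_i + P_i\,\widehat{ELR}\,\bigl(1-\pi_i\bigr), \]

where, for origin period i, \(C_i\) is the loss reported (or paid) to date, \(P_i\) the premium, and \(\pi_i\) the expected reported fraction implied by the development pattern. Saluz's stochastic parameterization in terms of incremental triangles differs from this textbook form.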
In building predictive models for actuarial losses, it is typical to trend the target loss variable with an actuarially derived trend rate. This paper shows that such a practice creates elusive predictive biases that are costly to an insurance book of business, and it proposes more effective ways of contemplating trends in actuarial predictive models.
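As a hypothetical illustration of the practice being critiqued (the values and trend rate are assumptions, not from the paper):

```python
annual_trend = 0.05            # assumed severity trend, purely illustrative
target_year = 2024

# (loss year, loss amount) pairs; hypothetical data.
losses = [(2019, 10_000.0), (2021, 12_500.0), (2023, 9_000.0)]

# Trend each loss to the target cost level before using it as the model target.
trended = [amt * (1 + annual_trend) ** (target_year - yr) for yr, amt in losses]
print(trended)
```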
It is often difficult to know which models to use when setting rates for auto insurance. We develop a market-based model selection procedure that incorporates the goals of the business. The procedure also makes the results easier to interpret and provides a better understanding of the models and the data.
The frequency and impact of cybersecurity incidents increase every year, with data breaches, often caused by malicious attackers, among the most costly, damaging, and pervasive.
Flood represents one of the costliest and most disruptive natural disasters in
The paper develops a policy-level unreported claim frequency distribution for use in individual claim reserving models. Recently, there has been increased interest in using individual claim detail to estimate reserves and to understand variability around reserve estimates. The method we describe can aid in the estimation/simulation of pure incurred but not reported (IBNR) from individual claim and policy data.
We develop Gaussian process (GP) models for incremental loss ratios in loss development triangles. Our approach brings a machine learning, spatial-based perspective to stochastic loss modeling. GP regression offers a nonparametric predictive distribution for future losses, quantifying uncertainty across three distinct layers: model risk, correlation risk, and extrinsic uncertainty due to randomness in observed losses.
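A minimal sketch of GP regression on triangle coordinates, assuming scikit-learn and a toy data set (not the authors' model or data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical observed cells of a loss development triangle:
# features = (accident year index, development lag), target = incremental loss ratio.
X = np.array([[0, 1], [0, 2], [0, 3], [1, 1], [1, 2], [2, 1]], dtype=float)
y = np.array([0.40, 0.25, 0.10, 0.42, 0.27, 0.38])

# Anisotropic RBF over (accident year, lag) plus a white-noise term.
kernel = RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predictive mean and standard deviation for unobserved future cells.
X_future = np.array([[1, 3], [2, 2], [2, 3]], dtype=float)
mean, std = gp.predict(X_future, return_std=True)
print(np.column_stack([mean, std]))
```

The predictive standard deviations give cell-level uncertainty; the layered decomposition described in the abstract requires more structure than this sketch provides.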
Hierarchical compartmental reserving models provide a parametric framework for describing aggregate insurance claims processes using differential equations.
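One commonly cited three-compartment specification from this literature (the report's own parameterization may differ) tracks exposure EX, outstanding claims OS, and paid claims PD:

\[ \frac{dEX}{dt} = -k_{er}\,EX, \qquad \frac{dOS}{dt} = k_{er}\,RLR\,EX - k_{p}\,OS, \qquad \frac{dPD}{dt} = k_{p}\,RRF\,OS, \]

where \(k_{er}\) and \(k_{p}\) are reporting and payment rates, RLR is a reported loss ratio, and RRF a reserve robustness factor.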
Although available since the 1990s, cyber insurance is still a relatively new product that is ever-changing. The report uses a conceptual approach to identify and evaluate potential exposure measures for cyber insurance. In particular, the report studies the losses that can arise with each cyber insurance coverage and identifies potential exposure measures related to these losses.
The phenomenon of social inflation has garnered a great deal of attention in the property and casualty (P&C) insurance industry. The term defies strict definition, though it is widely acknowledged to involve excessive growth in insurance settlements. We examine evidence for its existence in standard industrywide claims triangles through 2019.
This paper serves as a basic guide to economic scenario generators (ESGs), with an emphasis on applications for the property-casualty insurance industry. An ESG is a computer-based model that provides simulated examples of possible future values of various economic and financial variables.
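A minimal one-variable sketch of what "simulated examples of possible future values" means in practice; a real ESG links many economic and financial variables, and the model and parameters below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

# One-variable scenario generator: Vasicek-type short-rate paths (annual Euler steps).
r0, kappa, theta, sigma = 0.03, 0.15, 0.04, 0.01   # illustrative, not calibrated
n_scenarios, n_years = 1000, 10

rates = np.full((n_scenarios, n_years + 1), r0)
for t in range(n_years):
    shock = rng.normal(size=n_scenarios)
    rates[:, t + 1] = rates[:, t] + kappa * (theta - rates[:, t]) + sigma * shock

# Each row is one simulated 10-year path of the short rate.
print(rates.mean(axis=0))
```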
In ratemaking, calculation of a pure premium has traditionally been based on modeling frequency and severity in an aggregated claims model. For simplicity, it has been a standard practice to assume the independence of loss frequency and loss severity. In recent years, there has been sporadic interest in the actuarial literature exploring models that depart from this independence.
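For orientation, the aggregated (collective risk) model and the role independence plays in it:

\[ S = \sum_{i=1}^{N} X_i, \qquad \mathbb{E}[S] = \mathbb{E}[N]\,\mathbb{E}[X], \qquad \mathrm{Var}(S) = \mathbb{E}[N]\,\mathrm{Var}(X) + \mathrm{Var}(N)\,\mathbb{E}[X]^2, \]

where N is the claim count and the \(X_i\) are claim severities. The pure premium factorization \(\mathbb{E}[S] = \mathbb{E}[N]\,\mathbb{E}[X]\) relies on independence of frequency and severity; once they are dependent, the expectation no longer factors this way.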