Browse Research

The purpose of this article is to provide a computational tool via Maximum Likelihood (ML) and Markov Chain Monte Carlo (MCMC) methods for estimating the renewal function when the inter-arrival distribution of a renewal process is single-parameter Pareto (SPP). The proposed method has applications in a variety of applied fields, such as insurance modeling and the modeling of self-similar network traffic.
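As background (not part of the paper's abstract): for a single-parameter Pareto with known lower bound \theta, the inter-arrival density, the maximum-likelihood estimate of the shape \alpha, and the renewal function being estimated are

    f(x; \alpha) = \alpha \theta^{\alpha} / x^{\alpha+1}, \quad x \ge \theta
    \hat{\alpha} = n / \sum_{i=1}^{n} \ln(x_i / \theta)
    m(t) = E[N(t)] = \sum_{n \ge 1} F^{*n}(t)

where F^{*n} is the n-fold convolution of the inter-arrival distribution F; these convolutions have no simple closed form, which is why numerical ML and MCMC machinery is needed.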
In building predictive models for actuarial losses, it is typical to trend the target loss variable with an actuarially derived trend rate. This paper shows that such practice creates elusive predictive biases that are costly to an insurance book of business, and it proposes more effective ways of contemplating trends in actuarial predictive models.
It is often difficult to know which models to use when setting rates for auto insurance. We develop a market-based model selection procedure that incorporates the goals of the business and, in addition, makes the results easier to interpret and the models and data easier to understand.
The frequency and impact of cybersecurity incidents increase every year, with data breaches, often caused by malicious attackers, among the most costly, damaging, and pervasive.
The paper develops a policy-level unreported claim frequency distribution for use in individual claim reserving models. Recently, there has been increased interest in using individual claim detail to estimate reserves and to understand variability around reserve estimates. The method we describe can aid in the estimation/simulation of pure incurred but not reported (IBNR) from individual claim and policy data.
We develop Gaussian process (GP) models for incremental loss ratios in loss development triangles. Our approach brings a machine learning, spatial-based perspective to stochastic loss modeling. GP regression provides a nonparametric predictive distribution for future losses and quantifies uncertainty across three distinct layers: model risk, correlation risk, and extrinsic uncertainty due to randomness in observed losses.
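A minimal sketch of the general idea (an assumption-laden illustration, not the authors' implementation): fit a GP to incremental loss ratios indexed by accident year and development lag with scikit-learn, using an anisotropic RBF kernel plus observation noise; the toy data below are invented.

    # Sketch: GP regression over loss-triangle coordinates (accident year, development lag).
    # Kernel choice and data are illustrative assumptions, not the paper's specification.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    # Observed upper triangle: rows are (accident-year index, development lag)
    X = np.array([(i, j) for i in range(10) for j in range(10 - i)], dtype=float)
    y = np.exp(-0.4 * X[:, 1]) * (1 + 0.05 * rng.standard_normal(len(X)))  # toy loss ratios

    kernel = RBF(length_scale=[2.0, 2.0]) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Predictive mean and standard deviation for the unobserved lower triangle
    X_future = np.array([(i, j) for i in range(10) for j in range(10 - i, 10)], dtype=float)
    mu, sd = gp.predict(X_future, return_std=True)

The return_std output carries the predictive uncertainty in this single-model sketch; the paper's three-layer decomposition goes beyond it.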
In ratemaking, calculation of a pure premium has traditionally been based on modeling frequency and severity in an aggregated claims model. For simplicity, it has been a standard practice to assume the independence of loss frequency and loss severity. In recent years, there has been sporadic interest in the actuarial literature in models that depart from this independence assumption.
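Concretely, with claim count N and i.i.d. severities X_1, X_2, \dots, independence of frequency and severity yields the familiar pure-premium factorization

    E[S] = E\left[ \sum_{k=1}^{N} X_k \right] = E[N] \, E[X],

and it is precisely this factorization that no longer holds in the dependent models the abstract refers to.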
Maximum likelihood estimation has been the workhorse of statistics for decades, but alternative methods, going under the name “regularization,” are proving to have lower predictive variance. Regularization shrinks fitted values toward the overall mean, much like credibility does. There is good software available for regularization, and in particular, packages for Bayesian regularization make it easy to fit more complex models.
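As a schematic illustration (not the paper's notation), a ridge-type penalty added to least squares,

    \hat{\beta}_{\lambda} = \arg\min_{\beta} \; \| y - X\beta \|^2 + \lambda \|\beta\|^2,

with the intercept left unpenalized, pulls coefficients toward zero as \lambda grows, so fitted values shrink toward the overall mean in much the same way a credibility factor pulls a class estimate toward the grand mean.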
Analysis of truncated and censored data is a familiar part of actuarial practice, and so far the product-limit methodology, with the Kaplan-Meier estimator as its vanguard, has been the main statistical tool. At the same time, for the case of directly observed data, the sample mean methodology yields both efficient estimation and dramatically simpler statistical inference.
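For reference, the product-limit estimator in question is

    \hat{S}(t) = \prod_{t_i \le t} \left( 1 - d_i / n_i \right),

where d_i is the number of events observed at time t_i and n_i the number still at risk just before t_i; the sample-mean alternative the authors advocate is available when the data are observed directly rather than truncated or censored.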
The Bornhuetter-Ferguson method is among the more popular methods of projecting non-life paid or incurred triangles. For this method, Thomas Mack developed a stochastic model allowing the estimation of the prediction error resulting from such projections. Mack's stochastic model involves a parametrization of the Bornhuetter-Ferguson method based on incremental triangles of incurred or paid losses.
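For orientation, the deterministic Bornhuetter-Ferguson projection of an origin period's ultimate losses is usually written

    \hat{U}_{BF} = C_k + \hat{\nu} \, \hat{q} \, (1 - 1/\hat{f}_k),

where C_k is the amount reported (or paid) to date, \hat{\nu}\hat{q} is the a priori expected ultimate (premium times expected loss ratio), and \hat{f}_k is the cumulative development factor to ultimate; Mack's stochastic model places distributional assumptions on the incremental amounts underlying this formula.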
Capital allocation is an essential task for risk pricing and performance measurement of insurance business lines. This paper provides a survey of existing capital allocation methods, including common approaches based on the gradients of risk measures and economic allocation arising from counterparty risk aversion. We implement all methods in two example settings: binomial losses and loss realizations from a catastrophe reinsurer.
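A typical gradient-based allocation in this literature is the Euler (marginal) principle: for a positively homogeneous, differentiable risk measure \rho and aggregate loss S = \sum_i X_i,

    K_i = \frac{d}{dh} \, \rho(S + h X_i) \Big|_{h=0}, \qquad \sum_i K_i = \rho(S),

so each line's capital equals the sensitivity of total required capital to a small increase in that line's exposure, and the allocations add up to the total.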
In this paper, we propose a generalization of the individual loss reserving model introduced by Pigeon et al. (2013), using a discrete-time framework for claims development. We use a copula to model the potential dependence within the development structure of a claim, which allows a wide variety of marginal distributions. We also add a specific component to account for claims closed without payment.
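The reason copulas permit arbitrary marginals is Sklar's representation: any joint distribution of the development components can be written

    H(x_1, \dots, x_d) = C\big( F_1(x_1), \dots, F_d(x_d) \big),

so the dependence structure C can be chosen separately from the marginal distributions F_1, \dots, F_d.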
The concept of risk distribution, or aggregating risk to reduce the potential volatility of loss results, is a prerequisite for an insurance transaction. But how much risk distribution is enough for a transaction to qualify as insurance? This paper looks at different methods that can be used to answer that question and ascertain whether or not risk distribution has been achieved from an actuarial point of view.
This paper demonstrates an approach to applying the lasso variable shrinkage and selection method to loss models arising in actuarial science. Specifically, the group lasso penalty is applied to the GB2 (generalized beta of the second kind) distribution, which is widely used in current actuarial research.
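In generic form (implementation details of the paper aside), the group lasso replaces straight maximum likelihood with the penalized objective

    \hat{\beta} = \arg\min_{\beta} \; -\ell(\beta) + \lambda \sum_{g} \sqrt{p_g} \, \| \beta_g \|_2,

where \ell is the GB2 log-likelihood, the coefficients are partitioned into groups \beta_g of size p_g (for example, the dummy variables of one categorical rating factor), and the unsquared \ell_2 norm forces entire groups to zero, giving selection at the level of whole variables.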
The classical credibility theory circumvents the challenge of finding the bona fide Bayesian estimate (with respect to squared-error loss) by restricting attention to the class of linear estimators of the data. See, for example, Bühlmann and Gisler (2005) and Klugman et al. (2008) for a detailed treatment.
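The linear restriction leads to the familiar Bühlmann credibility premium,

    \hat{\mu}_i = Z \, \bar{X}_i + (1 - Z) \, \mu, \qquad Z = \frac{n}{n + k}, \qquad k = \frac{E[\sigma^2(\Theta)]}{\mathrm{Var}[\mu(\Theta)]},

which is the best linear approximation, under squared-error loss, to the Bayesian estimate the abstract refers to.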
Properly modeling changes over time is essential for forecasting and important for any model or process with data that span multiple time periods. Despite this, most approaches used are ad hoc or lack a statistical framework for making accurate forecasts.
In this paper we analyze the model introduced in Siegenthaler (2017). The author promises to present estimators for both the one-year (solvency) uncertainty and the ultimate uncertainty of estimated ultimate claim amounts that depend neither on any claim data nor on the reserving method used to estimate those ultimates. Unfortunately, the model cannot fulfill this promise: it only corrects for some bias in the estimated ultimates.
An excess loss factor is a measure of expected losses in excess of a given per-occurrence limit. The National Council on Compensation Insurance (NCCI) uses excess loss factors in its retrospective rating plan as well as in aggregate and class ratemaking.
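In rough terms (the NCCI's working definition involves further ratemaking detail), the quantity underlying an excess loss factor is the expected loss excess of a per-occurrence limit L,

    E[(X - L)_+] = \int_{L}^{\infty} \left( 1 - F(x) \right) dx,

typically expressed as an excess ratio E[(X - L)_+] / E[X] before being loaded onto the relevant premium or loss base.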
Because insured losses are positive, loss distributions start from zero and are right-tailed. However, residuals, or errors, are centered about a mean of zero and have both right and left tails. Seldom do error terms from models of insured losses seem normal. Usually they are positively skewed, rather than symmetric. And their right tails, as measured by their asymptotic failure rates, are heavier than that of the normal.
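The asymptotic failure (hazard) rate invoked here is

    r(x) = \frac{f(x)}{1 - F(x)};

for the normal distribution r(x) grows roughly linearly in x, while heavier right tails have failure rates that grow more slowly, level off (exponential-type tails), or decline toward zero (Pareto-type tails), which is the sense in which these residual tails are heavier than the normal's.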
Several approximations for the distribution of aggregate claims have been proposed in the literature. In this paper, we develop a saddlepoint approximation for the aggregate claims distribution and compare it with some existing approximations, such as the NP2, gamma, inverse Gaussian (IG), and gamma-IG mixture approximations.
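For orientation, the first-order saddlepoint approximation to the density of the aggregate claims S, with cumulant generating function K_S, is

    \hat{f}_S(x) = \frac{\exp\{ K_S(\hat{s}) - \hat{s} x \}}{\sqrt{2\pi K_S''(\hat{s})}}, \qquad K_S'(\hat{s}) = x,

with tail probabilities available from the companion Lugannani-Rice formula; the comparison approximations (NP2, gamma, IG, gamma-IG mixture) instead work by matching a few moments of S.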
The concept of the bias-variance tradeoff provides a mathematical basis for understanding the common modeling problem of underfitting vs. overfitting. While the bias-variance tradeoff is a standard topic in machine learning discussions, the terminology and application differ from those of the actuarial literature. In this paper we demystify the bias-variance decomposition by providing a detailed foundation for the theory.
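The decomposition in question: under the usual setup y = f(x) + \varepsilon with E[\varepsilon] = 0 and \mathrm{Var}(\varepsilon) = \sigma^2, and with the expectation taken over training samples and noise, the expected squared prediction error of a fitted model \hat{f} at a point x splits as

    E\left[ (y - \hat{f}(x))^2 \right] = \left( E[\hat{f}(x)] - f(x) \right)^2 + \mathrm{Var}\big( \hat{f}(x) \big) + \sigma^2,

that is, squared bias plus variance plus irreducible noise, with only the first two terms under the modeler's control.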