Browse Research

Risk aggregation is virtually everywhere in insurance applications. Indeed, in the vast majority of situations, insurers are interested in the properties of the sums of the risks they are exposed to, rather than in the stand-alone risks per se. Unfortunately, the problem of formulating the probability distributions of the aforementioned sums is rather involved, and as a rule does not have an explicit solution.
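A minimal illustration of the point, not taken from the paper: even for a small portfolio of independent lognormal risks the distribution of the sum has no closed form, so quantities such as tail quantiles are typically obtained by simulation or numerical convolution. All figures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_risks, n_sims = 10, 100_000

# Hypothetical stand-alone risks: i.i.d. lognormal severities.
losses = rng.lognormal(mean=0.0, sigma=1.0, size=(n_sims, n_risks))
aggregate = losses.sum(axis=1)

print("mean of the sum:", aggregate.mean())
print("99.5% quantile of the sum:", np.quantile(aggregate, 0.995))
```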
This paper discusses an alternative approach to using and credibility-weighting excess loss information for large account pricing. The typical approach is to analyze the burn costs in each excess layer directly (see Clark 2011, for example). Burn costs, however, are extremely volatile and highly right skewed, properties that linear credibility methods such as Bühlmann-Straub do not handle well (Venter 2003).
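For reference, the Bühlmann-Straub estimator mentioned above is linear in the observed average. In its standard form, with \(m_i\) the account's exposure, \(\bar{X}_i\) its observed (burn-cost) average, \(\sigma^2\) the expected process variance, \(a\) the variance of hypothetical means, and \(\hat{\mu}\) the complement of credibility:

```latex
Z_i = \frac{m_i}{m_i + k}, \qquad k = \frac{\sigma^2}{a}, \qquad
\hat{\mu}_i = Z_i \,\bar{X}_i + (1 - Z_i)\,\hat{\mu}.
```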
The standard method for calculating reserves for permanently injured worker benefits (indemnity and medical) is a combination of adjuster-estimated case reserves and reserves for incurred but not reported claims (IBNR) using a triangle method. There has been some interest in other reserving methodologies based on a calculation of future payments for the expected lifetime of the injured worker using a table of mortality rates.
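A minimal sketch of the tabular idea referred to above (not the triangle method, and with purely illustrative inputs): discount expected future benefit payments over the injured worker's remaining lifetime using annual mortality rates.

```python
def tabular_reserve(qx, annual_payment, interest_rate):
    """Expected present value of benefits paid at the end of each year survived.

    qx             -- annual mortality rates from the worker's current age onward
    annual_payment -- benefit paid at the end of each year the worker survives
    interest_rate  -- flat annual discount rate
    """
    v = 1.0 / (1.0 + interest_rate)
    survival = 1.0
    epv = 0.0
    for t, q in enumerate(qx, start=1):
        survival *= (1.0 - q)          # probability of surviving t full years
        epv += annual_payment * survival * v ** t
    return epv

# Hypothetical mortality rates and benefit level.
print(tabular_reserve(qx=[0.01, 0.012, 0.015, 0.02], annual_payment=50_000, interest_rate=0.03))
```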
This paper demonstrates a Bayesian approach for estimating loss costs associated with excess of loss reinsurance programs.
This paper proposes a method to derive paid tail factors using incurred tail factors and historical payout patterns. Traditionally, a ratio of paid-to-incurred losses—and its reciprocal, the conversion factor—may be used to convert payments at a specific maturity to incurred losses, prior to attaching an incurred tail factor. The implied paid tail factor would be the product of the incurred tail factor and the selected conversion factor.
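A worked example of the traditional relationship described above, with hypothetical numbers: a paid-to-incurred ratio of 0.80 at the evaluation age implies a conversion factor of 1/0.80 = 1.25.

```python
paid_to_incurred = 0.80
incurred_tail    = 1.05

conversion_factor = 1.0 / paid_to_incurred
implied_paid_tail = incurred_tail * conversion_factor
print(implied_paid_tail)   # 1.3125
```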
This paper proposes efficient statistical tools to detect which risk factors influence insurance losses before fitting a regression model. The statistical procedures are nonparametric and designed according to the format of the variables commonly encountered in P&C ratemaking: continuous, integer-valued (or discrete), or categorical.
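One standard nonparametric screen for a categorical rating variable, shown for illustration only (it is not the specific procedure proposed in the paper): a Kruskal-Wallis test of whether losses differ across the variable's levels, on simulated data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Hypothetical losses for three levels of a categorical rating variable.
losses_by_level = [rng.gamma(shape=2.0, scale=s, size=200) for s in (900, 1000, 1200)]

stat, p_value = kruskal(*losses_by_level)
print(f"H = {stat:.2f}, p-value = {p_value:.4f}")
```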
Rating areas are commonly used to capture unexplained geographical variability of claims in insurance pricing. A new method for defining rating areas is proposed using a two-part generalized geoadditive model that models spatial effects smoothly using Gaussian Markov random fields. The first part handles zero/nonzero expenses in a logistic model; the second handles nonzero expenses (on log-scale) in a linear model.
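A simplified two-part sketch of that structure, ignoring the spatial Gaussian Markov random field smoothing that the proposed geoadditive model adds and using simulated data: part one models whether an expense is nonzero, part two models the size of nonzero expenses on the log scale.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
X = sm.add_constant(rng.normal(size=(n, 2)))            # hypothetical rating covariates
occurred = rng.binomial(1, 0.3, size=n)                 # zero / nonzero indicator
expense = np.where(occurred == 1, rng.lognormal(7.0, 1.0, size=n), 0.0)

# Part 1: logistic model for the probability of a nonzero expense.
part1 = sm.GLM(occurred, X, family=sm.families.Binomial()).fit()

# Part 2: linear model for log(expense), fit only on the nonzero records.
nz = expense > 0
part2 = sm.OLS(np.log(expense[nz]), X[nz]).fit()

print(part1.params, part2.params)
```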
An actuarial approach for calculating a relativity based on geographic diversification is presented. The method models correlation as a function of distance between two exposures, and uses that to calculate a risk margin for each policy. It assumes that any premium provision for a company risk margin is currently allocated in proportion to policy risk-free premium, which results in a uniform risk-loading uprate for all policies.
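A minimal sketch of the diversification idea, under illustrative assumptions (an exponentially decaying correlation function and equal stand-alone volatilities, neither taken from the paper): correlation between two exposures falls with the distance between them, so the portfolio standard deviation is smaller than the sum of stand-alone standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))     # hypothetical exposure locations (km)
sigma  = np.full(50, 10_000.0)                 # stand-alone loss std dev per exposure

# Assumed correlation function: rho(d) = exp(-d / range_km).
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr  = np.exp(-dists / 25.0)

cov = corr * np.outer(sigma, sigma)
portfolio_sd = np.sqrt(cov.sum())
undiversified_sd = sigma.sum()
print("diversification ratio:", portfolio_sd / undiversified_sd)
```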
Split credibility has been used in practice for several decades, though its foundational theory has been investigated only recently. This paper studies the properties of the primary loss and the excess loss in the split experience plan of the National Council on Compensation Insurance (NCCI). We first revisit the claim that the excess loss is more volatile than the total loss.
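For readers unfamiliar with split plans, a minimal sketch of the loss split itself (the split point below is purely illustrative, not the NCCI value): each claim is divided into a primary part, capped at the split point, and an excess part above it.

```python
def split_loss(loss, split_point=17_500):
    """Split a single claim into its primary and excess components."""
    primary = min(loss, split_point)
    excess = max(loss - split_point, 0.0)
    return primary, excess

print(split_loss(5_000))    # (5000, 0.0)
print(split_loss(60_000))   # (17500, 42500.0)
```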
Very similar modeling is done in loss reserving and in mortality projection. Both start with incomplete data rectangles, traditionally called triangles, and model the data by year of origin, year of observation, and lag from origin to observation.
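A small illustration of that shared data structure, with made-up values: an incomplete rectangle indexed by year of origin and lag, where year of observation equals origin plus lag and cells beyond the latest observation year are still unknown.

```python
import numpy as np
import pandas as pd

origins = [2019, 2020, 2021, 2022]
lags = [0, 1, 2, 3]
latest_observation_year = 2022

triangle = pd.DataFrame(index=origins, columns=lags, dtype=float)
for origin in origins:
    for lag in lags:
        if origin + lag <= latest_observation_year:
            # Hypothetical observed amount for this origin/lag cell.
            triangle.loc[origin, lag] = np.random.default_rng(origin + lag).gamma(5) * 100

print(triangle)   # NaN cells are the future values to be modeled
```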
In volume 8, no. 2 of Variance, a technique using actuarial present value was applied to infrastructure service contracts (ISCs) as a way to manage obsolescence in portfolios of fixed, physical capital assets. The theory put forth in that paper was purely deductive and used basic financial mathematics to posit some untested hypotheses.
This paper presents closed-form formulas in order to estimate, based on the historical triangle of ultimate estimates, both the one-year and the total run-off reserve risk.
A Bayesian Markov chain Monte Carlo (MCMC) stochastic loss reserve model provides an arbitrarily large number of equally likely parameter sets that enable one to simulate future cash flows of the liability. Using these parameter sets to represent all future outcomes, it is possible to describe any future state in the model’s time horizon including those states necessary to calculate a cost-of-capital risk margin.
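A minimal sketch of a cost-of-capital risk margin computed from such output, assuming we already have simulated future annual cash flow paths (one row per equally likely parameter set). The capital measure used here, the 99.5th percentile of the remaining unpaid amount less its mean, and all rates are illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
cash_flows = rng.gamma(shape=5.0, scale=1_000.0, size=(10_000, 10))  # paths x future years

coc_rate, discount_rate = 0.06, 0.03
risk_margin = 0.0
for t in range(cash_flows.shape[1]):
    remaining = cash_flows[:, t:].sum(axis=1)                 # unpaid from year t onward
    capital_t = np.quantile(remaining, 0.995) - remaining.mean()
    risk_margin += coc_rate * capital_t / (1 + discount_rate) ** (t + 1)

print("cost-of-capital risk margin:", risk_margin)
```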
The betting industry has grown significantly, but there have been no developments in creating a regulatory framework akin to the EU Solvency and Capital Requirements Directives in financial services. This work derives a modular method to calculate the profit and variance of a portfolio of wagers placed with a bookmaker by subdividing the wagers into bundles according to the size of their likelihoods.
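A minimal sketch of the bookmaker's side of a portfolio of independent wagers, for illustration only (the paper's modular bundling by likelihood is not shown). For a wager with stake s, decimal odds o, and win probability p, the bettor's profit is s·o·W − s with W ~ Bernoulli(p); the bookmaker's profit is its negative.

```python
import numpy as np

stakes = np.array([10.0, 20.0, 5.0])
odds   = np.array([2.5, 1.8, 6.0])     # decimal odds
probs  = np.array([0.35, 0.55, 0.12])  # assumed true win probabilities

bettor_mean = np.sum(stakes * (odds * probs - 1.0))
bettor_var  = np.sum(stakes**2 * odds**2 * probs * (1.0 - probs))

print("bookmaker expected profit:", -bettor_mean)
print("bookmaker profit variance:", bettor_var)
```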
In this paper we consider the problem of stochastic claims reserving in the framework of development factor models (DFM). More precisely, we provide the generalized Mack chain-ladder (GMCL) model that expands the approaches of Mack (1993; 1994; 1999), Saito (2009) and Murphy, Bardis, and Majidi (2012).
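For orientation, a minimal chain-ladder sketch showing only the classical Mack (1993) point estimate on a hypothetical cumulative triangle (the GMCL generalizations are not shown): volume-weighted development factors applied to the latest diagonal.

```python
import numpy as np

# Hypothetical cumulative paid triangle; NaN marks future cells.
C = np.array([
    [1000.0, 1800.0, 2100.0, 2200.0],
    [1100.0, 1900.0, 2250.0, np.nan],
    [1200.0, 2100.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])

n = C.shape[1]
factors = []
for j in range(n - 1):
    rows = ~np.isnan(C[:, j + 1])
    factors.append(C[rows, j + 1].sum() / C[rows, j].sum())   # volume-weighted factor

# Project each origin year to ultimate from its latest known value.
ultimates = []
for i in range(C.shape[0]):
    last = np.max(np.where(~np.isnan(C[i]))[0])
    ultimates.append(C[i, last] * np.prod(factors[last:]))

print(np.round(factors, 3), np.round(ultimates, 0))
```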
We present an attribution analysis of residential insurance losses due to noncatastrophic weather events and propose a comprehensive statistical methodology for assessment of future claim dynamics in the face of climate change. We also provide valuable insight into uncertainties of the developed forecasts for claim severities with respect to various climate model projections and greenhouse emission scenarios.
This paper provides a framework actuaries can use to think about cyber risk. We propose a differentiated view of cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences.
As the level of competition increases, pricing optimization is gaining a central role in most mature insurance markets, forcing insurers to optimize their rating and to consider customer behavior; the modeling of the latter is currently dominated by frameworks based on generalized linear models (GLMs).
Composite distributions have well-known applications in the insurance industry. In this paper, a composite exponential-Pareto distribution is considered, and the Bayes estimator under the squared error loss function is derived for the parameter θ, which is the boundary point for the supports of the two distributions.
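One common way to write a composite exponential-Pareto density, shown here only to fix ideas (the paper's exact parameterization and constraints may differ): an exponential piece below the boundary θ and a Pareto piece above it, joined by a normalizing constant and a continuity condition at θ.

```latex
f(x) =
\begin{cases}
  c\,\lambda e^{-\lambda x}, & 0 < x \le \theta,\\[4pt]
  c\,\dfrac{\alpha\,\theta^{\alpha}}{x^{\alpha+1}}, & x > \theta,
\end{cases}
\qquad
c = \frac{1}{2 - e^{-\lambda\theta}},
\qquad
\lambda e^{-\lambda\theta} = \frac{\alpha}{\theta}\ \ \text{(continuity at } x=\theta\text{)}.
```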
Misrepresentation is a type of insurance fraud that happens frequently in policy applications. Due to the unavailability of data, such frauds are usually expensive or difficult to detect. Based on the distributional structure of regular ratemaking data, we propose a generalized linear model (GLM) framework that allows for an embedded predictive analysis on the misrepresentation risk.
When predictive performance testing, rather than testing of model assumptions, is used for validation, the need for detailed model specification is greatly reduced. Minimum bias models trade away some statistical independence among data points in exchange for much better-behaved distributions underlying the individual data points.
Predictive modeling is arguably one of the most important tasks actuaries face in their day-to-day work. In practice, actuaries may have a number of reasonable models to consider, all of which will provide different predictions. The most common strategy is first to use some kind of model selection tool to select a “best model” and then to use that model to make predictions.
This paper advocates use of the generalized logarithmic mean as the midpoint of property catastrophe reinsurance layers when fitting rates on line with power curves. It demonstrates that the method is easy to implement and overcomes issues encountered when working with the usual candidates for the midpoint, such as the arithmetic, geometric, or logarithmic mean.
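For reference, one common definition of the generalized logarithmic mean of a layer with bounds a < b (the paper's exact order convention may differ):

```latex
L_p(a,b) = \left[ \frac{b^{\,p+1} - a^{\,p+1}}{(p+1)(b-a)} \right]^{1/p},
\qquad a \neq b,\ \ p \neq 0,\,-1,
```

which gives the arithmetic mean at p = 1, the geometric mean at p = −2, and the classical logarithmic mean (b − a)/(ln b − ln a) in the limit p → −1.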
I present evidence for a model in which parameters fit to the severity distribution at each report age follow a smooth curve with random error. More formally, this is a stochastic process, and it allows us to estimate parameters of the ultimate severity distribution.
Given a Bayesian Markov chain Monte Carlo (MCMC) stochastic loss reserve model for two separate lines of insurance, this paper describes how to fit a bivariate stochastic model that captures the dependencies between the two lines of insurance. A Bayesian MCMC model similar to the Changing Settlement Rate (CSR) model, as described in Meyers (2015), is initially fit to each line of insurance.