Browse Research

A Bayesian Markov chain Monte Carlo (MCMC) stochastic loss reserve model provides an arbitrarily large number of equally likely parameter sets that enable one to simulate future cash flows of the liability. Using these parameter sets to represent all future outcomes, it is possible to describe any future state in the model’s time horizon including those states necessary to calculate a cost-of-capital risk margin.
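As a rough illustration of the mechanics (not the paper's model), the sketch below assumes the MCMC output is a set of equally likely lognormal payment parameters, simulates one future cash-flow path per parameter set, and charges a hypothetical 6% cost of capital on the capital implied by the simulated remaining liabilities:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the MCMC output: 10,000 equally likely
# (mu, sigma) parameter sets for lognormal incremental paid losses
# over a 3-year runoff.  A real model would produce these via MCMC.
n_sets, horizon = 10_000, 3
mu = rng.normal(8.0, 0.1, size=(n_sets, horizon))
sigma = rng.uniform(0.2, 0.4, size=(n_sets, horizon))

# One simulated future cash-flow path per parameter set.
cash_flows = rng.lognormal(mu, sigma)          # shape (n_sets, horizon)

# Unpaid liability remaining at the start of each future year.
remaining = cash_flows[:, ::-1].cumsum(axis=1)[:, ::-1]

# Capital held each year: 99.5th percentile minus the mean of the
# remaining liability across the equally likely outcomes.
capital = np.quantile(remaining, 0.995, axis=0) - remaining.mean(axis=0)

# Cost-of-capital risk margin: assumed 6% charge on discounted capital.
coc, rate = 0.06, 0.03
discount = (1 + rate) ** -np.arange(1, horizon + 1)
risk_margin = coc * np.sum(capital * discount)
print(f"cost-of-capital risk margin: {risk_margin:,.0f}")
```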
The betting industry has grown significantly, yet no regulatory framework akin to the EU Solvency and Capital Requirements Directives in financial services has been developed for it. This work derives a modular method to calculate the profit and variance of a portfolio of wagers placed with a bookmaker by subdividing the wagers into bundles according to the size of their likelihoods.
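A minimal sketch of the modular calculation, under the simplifying assumption of independent fixed-odds wagers; the stakes, odds, probabilities, and bundle boundaries below are all hypothetical:

```python
import numpy as np

# Hypothetical portfolio: each wager has a stake, decimal odds, and a
# true win probability.  Bookmaker profit is +stake if the bettor
# loses, and stake - stake*odds if the bettor wins.
stakes = np.array([10.0, 10.0, 20.0, 5.0])
odds = np.array([2.0, 3.5, 1.8, 11.0])
p_win = np.array([0.45, 0.25, 0.52, 0.08])

# Per-wager profit moments for the bookmaker (wagers independent).
mean_profit = stakes * (1 - p_win * odds)
var_profit = (stakes * odds) ** 2 * p_win * (1 - p_win)

# "Bundling" groups wagers of similar likelihood; the moments of a
# bundle are the sums of its members' moments, so portfolio totals
# can be assembled modularly, bundle by bundle.
bundles = np.digitize(p_win, bins=[0.1, 0.3, 0.5])
for b in np.unique(bundles):
    m = bundles == b
    print(f"bundle {b}: mean={mean_profit[m].sum():+.2f}, "
          f"var={var_profit[m].sum():.2f}")
print(f"portfolio: mean={mean_profit.sum():+.2f}, var={var_profit.sum():.2f}")
```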
In this paper we consider the problem of stochastic claims reserving in the framework of development factor models (DFM). More precisely, we provide the generalized Mack chain-ladder (GMCL) model that expands the approaches of Mack (1993; 1994; 1999), Saito (2009) and Murphy, Bardis, and Majidi (2012).
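For orientation, the sketch below computes the classical volume-weighted Mack chain-ladder development factors on a toy triangle; the GMCL model generalizes this estimator's weighting and variance assumptions:

```python
import numpy as np

# Toy cumulative loss triangle (rows: accident years, cols: development
# ages); np.nan marks the unobserved future cells.
C = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2200., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

# Volume-weighted development factors f_j = sum_i C[i,j+1] / sum_i C[i,j],
# the classical Mack (1993) chain-ladder estimator.
n = C.shape[1]
f = np.empty(n - 1)
for j in range(n - 1):
    obs = ~np.isnan(C[:, j + 1])
    f[j] = C[obs, j + 1].sum() / C[obs, j].sum()

# Complete the triangle by successive multiplication.
filled = C.copy()
for j in range(n - 1):
    gap = np.isnan(filled[:, j + 1])
    filled[gap, j + 1] = filled[gap, j] * f[j]

print("development factors:", np.round(f, 3))
print("estimated ultimates:", np.round(filled[:, -1], 0))
```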
We present an attribution analysis of residential insurance losses due to noncatastrophic weather events and propose a comprehensive statistical methodology for assessment of future claim dynamics in the face of climate change. We also provide valuable insight into uncertainties of the developed forecasts for claim severities with respect to various climate model projections and greenhouse emission scenarios.
This paper provides a framework actuaries can use to think about cyber risk. We propose a differentiated view of cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences.
CAS E-Forum, Winter 2019 Featuring CAS Research Working Party Report
In this paper we describe a method of calibrating the Investment Income Offset element of the RBC Formula. Our key calibration decisions are the following: (1) we select the Present Value Approach rather than the Nominal Value Approach; (2) we convert the current combination of interest rate safety margins and UW risk safety targets to an equivalent UW risk safety target with no interest rate safety margin.
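The following toy calculation (hypothetical payout pattern and yield, not the paper's calibration) illustrates the distinction behind the first decision: the investment income offset reflects the gap between the nominal reserve and its present value:

```python
import numpy as np

# Hypothetical reserve runoff: 100 of nominal unpaid losses paid out
# over five years.
payout = np.array([0.40, 0.25, 0.15, 0.12, 0.08]) * 100.0
rate = 0.03                                   # assumed risk-free yield
t = np.arange(1, len(payout) + 1) - 0.5       # mid-year payment timing

nominal = payout.sum()
present_value = np.sum(payout / (1 + rate) ** t)
offset = nominal - present_value
print(f"nominal={nominal:.1f}  PV={present_value:.1f}  offset={offset:.2f}")
```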
In this paper we analyze the Line of Business (LOB) diversification elements of the RBC Formula. We compare the diversification credit produced by the NAIC Property/Casualty RBC Formula to the indicated diversification credit, i.e., the observed reduction in risk with increasing diversification.
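As a stylized illustration (not the full RBC Formula, whose LOB treatment is discussed further below), the sketch contrasts the credit from a square-root-of-sum-of-squares combination of standalone charges with an "indicated" credit read off a simulated joint distribution under an assumed 0.3 correlation:

```python
import numpy as np

# Standalone risk charges for three hypothetical lines of business.
R = np.array([100.0, 80.0, 60.0])

# A square-root-of-sum-of-squares combination implicitly treats the
# lines as independent.
combined = np.sqrt(np.sum(R ** 2))
credit = 1 - combined / R.sum()
print(f"combined charge: {combined:.1f}")
print(f"formula diversification credit: {credit:.1%}")

# An "indicated" credit instead comes from the observed joint
# distribution, here simulated with an assumed 0.3 correlation.
rng = np.random.default_rng(0)
corr = 0.3
cov = corr + (1 - corr) * np.eye(3)
X = rng.multivariate_normal(np.zeros(3), cov, size=100_000) * R
indicated = 1 - X.sum(axis=1).std() / X.std(axis=0).sum()
print(f"indicated diversification credit: {indicated:.1%}")
```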
It is a well-known result that when an Euler allocation is used to allocate capital by line, the overall expected return on capital can be increased by writing more business in lines where the expected return on allocated capital is greater than the overall company-wide expected return. If the cost of equity capital varies by line, however, writing more business in these lines may not be the best choice for the company.
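A small simulation sketch of the result, assuming a standard-deviation-based capital rule and hypothetical line means and covariances; the Euler allocation gives each line capital in proportion to its covariance with the company total:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated annual underwriting results for three lines (columns).
X = rng.multivariate_normal(
    mean=[12.0, 8.0, 5.0],
    cov=[[400., 60., 20.], [60., 225., 30.], [20., 30., 100.]],
    size=200_000,
)
total = X.sum(axis=1)

# Euler allocation for a standard-deviation-based capital requirement:
# line i receives capital proportional to cov(X_i, total).
capital_total = 2.0 * total.std()             # assumed capital rule
cov_with_total = np.array([np.cov(X[:, i], total)[0, 1] for i in range(3)])
alloc = capital_total * cov_with_total / total.var()

# Expected return on allocated capital by line vs. overall: lines with
# an above-average return raise the overall return if grown.
roc_line = X.mean(axis=0) / alloc
roc_total = total.mean() / capital_total
print("allocated capital:", np.round(alloc, 1))
print("return on allocated capital:", np.round(roc_line, 3))
print("overall return:", round(roc_total, 3))
```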
The New Zealand Earthquake Commission (EQC) started using DFA (Dynamic Financial Analysis) in 1994 and has used DFA commercially ever since. EQC was one of the pioneers in the application of DFA to the insurance industry. Other pioneering users of the same period are described in four papers in the Casualty Actuarial Society Forum, Spring 1996.
In this paper we apply a simple regression model to link performance of a D&O insurance line of business to the S&P 500 economic variable from an economic scenario generator (ESG). The regression structure is incorporated into an existing economic capital model. The distribution of the error term is constrained so that the final distribution of the D&O line is equivalent to the distribution previously used.
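One simple way to impose such a distributional constraint (a sketch, not necessarily the authors' construction) is rank reordering: sort a sample from the previously used marginal against the ranks of the regression signal, so the marginal is preserved exactly while the dependence on the ESG's S&P 500 variable is inherited:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical ESG scenarios for the S&P 500 annual return, and the
# D&O line's previously used marginal loss distribution.
sp500 = rng.normal(0.07, 0.18, n)
target_marginal = rng.lognormal(mean=15.0, sigma=0.5, size=n)

# Regression signal: D&O losses assumed to worsen when equities fall.
signal = -2.0 * sp500 + rng.normal(0.0, 0.1, n)

# Reorder the target marginal sample so its ranks match the signal's
# ranks: the marginal of d_and_o is then exactly the old distribution,
# but it now co-moves with the S&P 500 variable.
d_and_o = np.empty(n)
d_and_o[np.argsort(signal)] = np.sort(target_marginal)

print("marginal preserved:", np.allclose(np.sort(d_and_o),
                                         np.sort(target_marginal)))
print("correlation with S&P 500:", round(np.corrcoef(d_and_o, sp500)[0, 1], 2))
```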
CAS E-Forum, Summer 2019 Volume 2 Featuring the Non-Technical Reserving Call Papers
CAS E-Forum, Summer 2019 Featuring a CAS Research Working Party Report
The NAIC RBC Formula treatment of line of business (LOB) diversification (referred to in this paper as the CoMaxLine% Approach) is very different from the Solvency II Standard Formula treatment.
Motivation: Application of the Shane-Morelli method in practice across multiple reserve reviews revealed potential areas for refinement. Method: A theoretical examination of the curves best suited to developing workers' compensation tail factors resulted in a proposed enhancement to this part of the original methodology.
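As an example of the kind of curve involved (the specific inverse power form and all factor values below are illustrative, not the paper's selection), a tail factor can be extrapolated by fitting Sherman's inverse power curve to observed age-to-age factors:

```python
import numpy as np

# Hypothetical workers' compensation age-to-age development factors at
# annual evaluations 1-2, 2-3, ...
ages = np.arange(1, 9)
ldf = np.array([1.60, 1.25, 1.12, 1.07, 1.045, 1.03, 1.02, 1.015])

# Fit Sherman's inverse power curve, ldf_t - 1 = a * t^(-b), by linear
# regression in log space.
slope, intercept = np.polyfit(np.log(ages), np.log(ldf - 1), 1)
a, b = np.exp(intercept), -slope

# Tail factor: product of extrapolated factors from age 9 out to 50.
extrap = 1 + a * np.arange(9, 51, dtype=float) ** (-b)
tail = np.prod(extrap)
print(f"fitted a={a:.3f}, b={b:.3f}, tail factor={tail:.4f}")
```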
CAS E-Forum, Spring 2019 Featuring CAS Research Working Parties' Reports
As the level of competition increases, pricing optimization is gaining a central role in most mature insurance markets, forcing insurers to optimize their rating and to consider customer behavior; customer-behavior modeling is currently dominated by frameworks based on generalized linear models (GLMs).
Composite distributions have well-known applications in the insurance industry. In this paper, a composite exponential-Pareto distribution is considered, and the Bayes estimator under the squared error loss function is derived for the parameter θ, which is the boundary point for the supports of the two distributions.
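A numerical sketch of the idea, assuming the Teodorescu-Vernic form of the composite and using a grid approximation with a vague uniform prior rather than the paper's analytical derivation; the continuity and smoothness conditions at θ fix the Pareto shape α as the solution of (α+1)e^{-(α+1)} = α:

```python
import numpy as np

# Composite exponential-Pareto: exponential density below theta, Pareto
# above, continuous and smooth at theta.  With u = alpha + 1, those
# conditions force u to solve u * exp(-u) = u - 1.
u = 1.35
for _ in range(50):                        # Newton's method
    u -= (u * np.exp(-u) - u + 1) / (np.exp(-u) * (1 - u) - 1)
alpha = u - 1.0
c = 1.0 / (2.0 - np.exp(-u))               # normalizing constant

def loglik(theta, x):
    """Composite log-likelihood; the exponential rate is u / theta."""
    lam = u / theta
    below, above = x[x <= theta], x[x > theta]
    return (below.size * np.log(c * lam) - lam * below.sum()
            + np.sum(np.log(c * alpha) + alpha * np.log(theta)
                     - (alpha + 1) * np.log(above)))

# Bayes estimator under squared-error loss = posterior mean, here
# approximated on a grid with a uniform prior on theta.
x = np.array([0.3, 0.8, 1.1, 1.9, 2.5, 4.0, 7.5, 20.0])
grid = np.linspace(0.5, 10.0, 2000)
logpost = np.array([loglik(t, x) for t in grid])
post = np.exp(logpost - logpost.max())
post /= post.sum()
print(f"alpha={alpha:.4f}, posterior mean of theta={np.sum(grid * post):.3f}")
```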
Misrepresentation is a type of insurance fraud that happens frequently in policy applications. Due to the unavailability of data, such frauds are usually expensive or difficult to detect. Based on the distributional structure of regular ratemaking data, we propose a generalized linear model (GLM) framework that allows for an embedded predictive analysis on the misrepresentation risk.
When predictive performance testing, rather than testing of model assumptions, is used for validation, the need for detailed model specification is greatly reduced. Minimum bias models trade some degree of statistical independence across data points in exchange for much better-behaved distributions underlying the individual data points.
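For concreteness, a classical example of a minimum bias model is Bailey's two-way multiplicative iteration, sketched below on hypothetical pure premiums; the balance conditions it solves are equivalent to a Poisson GLM:

```python
import numpy as np

# Hypothetical pure premiums by territory (rows) and class (cols),
# with exposure weights: a toy two-way multiplicative rating plan.
r = np.array([[110., 205., 310.],
              [130., 250., 355.],
              [150., 290., 410.]])
w = np.array([[50., 30., 10.],
              [40., 60., 20.],
              [10., 20., 70.]])

# Bailey's minimum bias iteration: alternately set each factor so the
# weighted observed and fitted pure premiums balance along its margin.
x = np.ones(3)          # territory relativities
y = r.mean(axis=0)      # class relativities (starting values)
for _ in range(100):
    x = (w * r).sum(axis=1) / (w * y).sum(axis=1)
    y = (w * r).sum(axis=0) / (w.T * x).sum(axis=1)

fitted = np.outer(x, y)
print("territory factors:", np.round(x, 3))
print("class factors:", np.round(y, 3))
print("max relative error:", np.abs(fitted / r - 1).max().round(3))
```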
Predictive modeling is arguably one of the most important tasks actuaries face in their day-to-day work. In practice, actuaries may have a number of reasonable models to consider, all of which will provide different predictions. The most common strategy is first to use some kind of model selection tool to select a “best model” and then to use that model to make predictions.
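A common alternative, sketched below with hypothetical predictions and AIC values, is to blend the candidates rather than bet everything on the selected model, weighting each model's predictions by its Akaike weight:

```python
import numpy as np

# Hypothetical predictions from three candidate models for the same
# cases, together with each model's AIC from fitting.
preds = np.array([[102., 98., 110.],
                  [ 99., 97., 115.],
                  [105., 96., 108.]])        # rows: models, cols: cases
aic = np.array([250.3, 251.1, 254.8])

# Akaike weights: exp(-delta/2), normalized across models.
delta = aic - aic.min()
w = np.exp(-delta / 2)
w /= w.sum()

averaged = w @ preds
print("model weights:", np.round(w, 3))
print("averaged predictions:", np.round(averaged, 2))
```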