Browse Research

This paper advocates the use of the generalized logarithmic mean as the midpoint of property catastrophe reinsurance layers when fitting rates on line with power curves. It demonstrates that the method is easy to implement and overcomes issues encountered when working with the usual candidates for the midpoint, such as the arithmetic, geometric, or logarithmic mean.
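As a companion to this abstract, here is a minimal sketch of the generalized logarithmic mean itself, using the standard order-p definition with its familiar limiting cases; the layer endpoints and power-curve exponent below are hypothetical, not taken from the paper:

```python
import numpy as np

def generalized_log_mean(a: float, b: float, p: float) -> float:
    """Generalized logarithmic mean L_p(a, b) for 0 < a < b.

    L_p(a, b) = [(b**(p+1) - a**(p+1)) / ((p+1) * (b - a))]**(1/p)
    Special cases: p = 1 arithmetic mean, p = -2 geometric mean,
    p -> -1 logarithmic mean, p -> 0 identric mean.
    """
    if a == b:
        return a
    if np.isclose(p, 0.0):   # limit p -> 0: identric mean
        return np.exp((b * np.log(b) - a * np.log(a)) / (b - a) - 1.0)
    if np.isclose(p, -1.0):  # limit p -> -1: logarithmic mean
        return (b - a) / (np.log(b) - np.log(a))
    return ((b**(p + 1) - a**(p + 1)) / ((p + 1) * (b - a)))**(1.0 / p)

# Midpoint of a hypothetical 5m xs 5m layer (attachment 5, exhaustion 10)
# under an assumed power-curve exponent alpha = 1.5, i.e., p = -alpha:
print(generalized_log_mean(5.0, 10.0, -1.5))
```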
CAS E-Forum, Summer 2018 Featuring the report of the CAS Working Party on Sustainable ERM (SERM)
Actuaries have devised numerous methods for interpolating annual evaluation loss development factors (LDF) to arrive at quarterly evaluation factors. Not all of these work as well as might be hoped. Some introduce oscillations not found in the original factors. Many lead to IBNR projections that move erratically or have blips that are hard to explain.
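For context, here is a minimal sketch of one naive interpolation candidate (log-linear interpolation of the percent reported between annual evaluations; all factors are hypothetical). Simple schemes like this are exactly the kind the abstract cautions about, since they can leave kinks at the annual nodes:

```python
import numpy as np

# Hypothetical annual age-to-ultimate factors (CDFs) at 12, 24, ... months.
ages = np.array([12, 24, 36, 48, 60])
cdf  = np.array([3.50, 1.80, 1.30, 1.10, 1.02])

# One simple candidate: interpolate the log of percent reported (1 / CDF)
# linearly between annual evaluations to get quarterly factors.
pct_reported = 1.0 / cdf
q_ages = np.arange(12, 61, 3)                          # quarterly ages
q_pct  = np.exp(np.interp(q_ages, ages, np.log(pct_reported)))
q_cdf  = 1.0 / q_pct

for age, f in zip(q_ages, q_cdf):
    print(f"{age:3d} months: CDF = {f:.3f}")
```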
CAS E-Forum, Spring 2018-Volume 2 Featuring Ratemaking Call Papers, Climate Change Call Papers and three Independent Research Papers
CAS E-Forum, Spring 2018 Featuring a report of the CAS Automated Vehicles Task Force and one Independent Research Paper
Purpose and Intended Result: This research paper aims to fill the void in the currently available actuarial literature regarding information required by the reinsurance underwriter but often lacking when pricing property per-risk coverages worldwide.
I present evidence for a model in which parameters fit to the severity distribution at each report age follow a smooth curve with random error. More formally, this is a stochastic process, and it allows us to estimate parameters of the ultimate severity distribution.
Given a Bayesian Markov chain Monte Carlo (MCMC) stochastic loss reserve model for two separate lines of insurance, this paper describes how to fit a bivariate stochastic model that captures the dependencies between the two lines of insurance. A Bayesian MCMC model similar to the Changing Settlement Rate (CSR) model, as described in Meyers (2015), is initially fit to each line of insurance.
Bootstrapping is often employed for quantifying the inherent variability of development triangle GLMs. While easy to implement, bootstrapping approaches frequently break down when dealing with actual data sets. Often this happens because linear rescaling leads to negative values in the resampled incremental development data.
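To illustrate the failure mode described here, a minimal sketch (with hypothetical fitted and actual incrementals) of the residual-resampling step in an over-dispersed-Poisson-style bootstrap, showing how small fitted cells can yield negative pseudo-incrementals:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted and actual incremental values from a development
# triangle GLM; note the small fitted means in the tail cells.
fitted = np.array([1000.0, 400.0, 120.0, 30.0, 5.0])
actual = np.array([ 880.0, 470.0, 100.0, 42.0, 2.0])

# Unscaled Pearson residuals: r = (c - m) / sqrt(m)
resid = (actual - fitted) / np.sqrt(fitted)

# Residual-resampling step: c* = m + r* * sqrt(m). A large negative
# resampled residual paired with a small fitted mean drives the
# pseudo-incremental below zero -- the breakdown described above.
n_neg = 0
for _ in range(10_000):
    r_star = rng.choice(resid, size=resid.size, replace=True)
    pseudo = fitted + r_star * np.sqrt(fitted)
    n_neg += np.any(pseudo < 0)

print(f"simulations with at least one negative pseudo-cell: {n_neg}")
```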
In property-casualty insurance ratemaking, insurers often have access to external information, which could be manual rates from a rating bureau or scores from a commercial predictive model. Such collateral information can be valuable because the insurer may have neither sufficient rating information nor the predictive-modeling expertise to produce an effective score.
Motivation: Reserving is typically performed on aggregate claim data using familiar reserving techniques such as the chain ladder method. Rich data about individual claims is often available but is not systematically used to estimate ultimate losses. Machine learning techniques are readily available to unlock the benefits of this information, potentially resulting in more accurate reserve estimates.
When predictive performance testing, rather than testing of model assumptions, is used for validation, the need for detailed model specification is greatly reduced. Minimum bias models trade some degree of statistical independence among data points in exchange for much better-behaved distributions underlying the individual data points.
Given that cyber risk is a major driver of operational risk, and that businesses and individuals are looking to the insurance industry to provide coverage for the cyber risks they face, we asked authors to “share their thoughts and reflections on either how insurance companies should deal with cyber risk in an ERM context, or how insurance companies can respond to society’s call to action to expand cybersecurity insurance offerings.”
A Bayesian MCMC stochastic loss reserve model provides an arbitrarily large number of equally likely parameter sets that enable one to simulate future cash flows of the liability. Using these parameter sets to represent all future outcomes, it is possible to describe any future state in the model’s time horizon including those states necessary to calculate a cost of capital risk margin.
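A minimal sketch of that final step, using a textbook cost-of-capital construction rather than anything specific to the paper; the simulated unpaid-loss paths below stand in for the MCMC parameter sets, and all figures are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for MCMC output: 10,000 equally likely simulated
# paths of unpaid losses at future valuation dates t = 1..5.
n_paths, horizon = 10_000, 5
runoff = np.linspace(1.0, 0.2, horizon)        # liability runs off over time
unpaid = rng.lognormal(mean=8.0, sigma=0.3, size=(n_paths, horizon)) * runoff

coc, rfr = 0.06, 0.02                          # cost-of-capital / risk-free rates

# Required capital at each future date: 99.5% VaR of unpaid losses
# in excess of their mean.
capital = np.quantile(unpaid, 0.995, axis=0) - unpaid.mean(axis=0)

# Risk margin: discounted cost of holding that capital at each date.
t = np.arange(1, horizon + 1)
risk_margin = np.sum(coc * capital / (1.0 + rfr) ** t)
print(f"cost-of-capital risk margin: {risk_margin:,.0f}")
```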
In this paper, we study reinsurance treaties between an insurer and a reinsurer, considering both parties’ interests; most earlier papers focus only on the insurer’s point of view. The latest research that considers both sides has considerably oversimplified the joint survival function, which leads to an unrealistic optimal solution: one of the parties can make risk-free profits while the other bears all the risk.
Analysis of insurance data provides input for making decisions regarding underwriting, pricing of insurance products, and claims, as well as profitability analysis. In this paper, we consider graphical modeling as a vehicle to reveal the dependency structure of the categorical variables used in the Australian Automobile data. The methodology developed here may supplement the traditional approach to ratemaking.
This paper discusses some strategies to better handle the modeling of loss development patterns. Some improvements to current curve- and distribution-fitting strategies are shown, including the use of smoothing splines to help the modeled patterns better fit the data. A strategy is shown for applying credibility to these curves that produces results that are well-behaved and that can be implemented without the use of Bayesian software.
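A minimal sketch of the smoothing-spline idea on a hypothetical percent-of-ultimate pattern, fitting on a logit scale so the smoothed pattern stays in (0, 1); this illustrates the general technique, not the paper’s specific credibility procedure:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical empirical percent-of-ultimate development pattern.
ages = np.array([12, 24, 36, 48, 60, 72, 84], dtype=float)
pct  = np.array([0.30, 0.55, 0.74, 0.83, 0.91, 0.94, 0.97])

# Fit a smoothing spline on the logit scale; the smoothing parameter s
# trades fidelity to the raw pattern for smoothness.
logit = np.log(pct / (1 - pct))
spline = UnivariateSpline(ages, logit, s=0.05)

fine_ages = np.arange(12, 85)
smoothed = 1.0 / (1.0 + np.exp(-spline(fine_ages)))
print(np.round(smoothed[::12], 3))    # smoothed pattern at annual ages
```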
Excess of policy limits (XPL) losses are a phenomenon that presents challenges for the practicing actuary. This paper proposes using a classic actuarial framework of frequency and severity, modified to address the unique challenge of XPL. The result is an integrated model of XPL losses together with non-XPL losses.
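A minimal frequency/severity sketch in the spirit of that framework; all parameters are hypothetical, and treating the XPL component as an independent extra amount above the limit is a simplifying assumption, not the paper’s model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical parameters for the sketch.
lam    = 100         # expected annual claim count (Poisson)
p_xpl  = 0.01        # probability a claim produces an XPL component
limit  = 1_000_000   # policy limit

agg = []
for _ in range(5_000):
    n = rng.poisson(lam)
    # Ground-up losses capped at the policy limit (non-XPL component).
    sev = np.minimum(rng.lognormal(11.0, 1.8, size=n), limit)
    # XPL component: a few claims carry an extra amount above the limit.
    is_xpl = rng.random(n) < p_xpl
    xpl = np.where(is_xpl, rng.lognormal(12.0, 1.0, size=n), 0.0)
    agg.append(sev.sum() + xpl.sum())

agg = np.array(agg)
print(f"mean aggregate: {agg.mean():,.0f}; "
      f"99th percentile: {np.quantile(agg, 0.99):,.0f}")
```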
Claim management requires applying statistical techniques to the analysis and interpretation of claims data. The central piece of claim management is claims modeling and prediction.
CAS E-Forum, Winter 2017 Featuring Independent Research