Browse Research
2019
In this paper we consider the problem of stochastic claims reserving in the framework of development factor models (DFM). More precisely, we provide the generalized Mack chain-ladder (GMCL) model that expands the approaches of Mack (1993; 1994; 1999), Saito (2009) and Murphy, Bardis, and Majidi (2012).
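As background, the volume-weighted age-to-age factors at the heart of Mack's (1993) chain-ladder can be sketched as follows. The triangle values, function names, and projection step are illustrative only, not the GMCL extension this paper develops:

```python
# Sketch of Mack-style volume-weighted development factors on a toy
# cumulative loss triangle; data and names are illustrative.

triangle = [
    [100.0, 150.0, 165.0],  # accident year 1, fully developed
    [110.0, 160.0],         # accident year 2
    [120.0],                # accident year 3
]

def mack_factors(tri):
    """Volume-weighted age-to-age factor for each age j:
    f_j = sum_i C_{i,j+1} / sum_i C_{i,j}, over rows with both ages observed."""
    n_ages = max(len(row) for row in tri)
    factors = []
    for j in range(n_ages - 1):
        num = sum(row[j + 1] for row in tri if len(row) > j + 1)
        den = sum(row[j] for row in tri if len(row) > j + 1)
        factors.append(num / den)
    return factors

def complete_triangle(tri, factors):
    """Project each accident year to ultimate by successive multiplication."""
    full = []
    for row in tri:
        row = list(row)
        for j in range(len(row) - 1, len(factors)):
            row.append(row[-1] * factors[j])
        full.append(row)
    return full
```

The last entry of each completed row is the chain-ladder ultimate for that accident year; the GMCL model generalizes the variance and dependence assumptions around these factors.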
2019
We present an attribution analysis of residential insurance losses due to noncatastrophic weather events and propose a comprehensive statistical methodology for assessment of future claim dynamics in the face of climate change. We also provide valuable insight into uncertainties of the developed forecasts for claim severities with respect to various climate model projections and greenhouse emission scenarios.
2019
This paper provides a framework actuaries can use to think about cyber risk. We propose a differentiated view of cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences.
2018
As the level of competition increases, pricing optimization is gaining a central role in most mature insurance markets, forcing insurers to optimize their rating and consider customer behavior; the modeling scene for the latter is one currently dominated by frameworks based on generalized linear models (GLMs).
2018
Composite distributions have well-known applications in the insurance industry. In this paper, a composite exponential-Pareto distribution is considered, and the Bayes estimator under the squared error loss function is derived for the parameter θ, which is the boundary point for the supports of the two distributions.
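A composite exponential-Pareto density can be built by splicing a truncated exponential head onto a Pareto tail at the boundary point. The construction below (continuity-matched mixing weight, parameter values) is a generic illustration of the composite idea, not the paper's Bayesian estimation procedure:

```python
# Illustrative spliced exponential-Pareto density, continuous at the
# boundary theta; parameter values are arbitrary examples.
import math

def composite_pdf(x, lam=1.0, alpha=2.0, theta=1.5):
    """Head: Exp(lam) truncated to (0, theta]; tail: Pareto(alpha, theta).
    The mixing weight phi is chosen so the density is continuous at theta."""
    head = lam * math.exp(-lam * x) / (1 - math.exp(-lam * theta))
    tail = alpha * theta ** alpha / x ** (alpha + 1)
    h_t = lam * math.exp(-lam * theta) / (1 - math.exp(-lam * theta))
    t_t = alpha / theta
    phi = t_t / (h_t + t_t)   # continuity: phi * h_t == (1 - phi) * t_t
    return phi * head if x <= theta else (1 - phi) * tail
```

Because each piece is a normalized density on its own support, the spliced density integrates to one for any continuity-respecting weight phi.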
2018
Misrepresentation is a type of insurance fraud that happens frequently in policy applications. Due to the unavailability of data, such frauds are usually expensive or difficult to detect. Based on the distributional structure of regular ratemaking data, we propose a generalized linear model (GLM) framework that allows for an embedded predictive analysis on the misrepresentation risk.
2018
When predictive performance testing, rather than testing of model assumptions, is used for validation, the need for detailed model specification is greatly reduced. Minimum bias models trade some degree of statistical independence among data points in exchange for much better-behaved distributions underlying the individual data points.
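The marginal-balance iteration behind multiplicative minimum bias models (in the spirit of Bailey's method) can be sketched as follows; the two-way classification data and helper names are made up for illustration:

```python
# Sketch of a multiplicative minimum-bias (Bailey-style) iteration for a
# two-way classification; the 2x2 data are hypothetical.

losses = {(0, 0): 100.0, (0, 1): 200.0, (1, 0): 150.0, (1, 1): 320.0}
exposure = {(0, 0): 10.0, (0, 1): 15.0, (1, 0): 12.0, (1, 1): 20.0}

def min_bias(losses, exposure, n_rows=2, n_cols=2, iters=100):
    """Alternate between row and column relativities so that fitted losses
    exposure * x_i * y_j balance the actual losses on each margin."""
    x = [1.0] * n_rows
    y = [1.0] * n_cols
    for _ in range(iters):
        for i in range(n_rows):
            x[i] = (sum(losses[i, j] for j in range(n_cols))
                    / sum(exposure[i, j] * y[j] for j in range(n_cols)))
        for j in range(n_cols):
            y[j] = (sum(losses[i, j] for i in range(n_rows))
                    / sum(exposure[i, j] * x[i] for i in range(n_rows)))
    return x, y
```

Each update enforces balance on one margin exactly; iterating converges to relativities that balance both margins simultaneously.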
2018
Predictive modeling is arguably one of the most important tasks actuaries face in their day-to-day work. In practice, actuaries may have a number of reasonable models to consider, all of which will provide different predictions. The most common strategy is first to use some kind of model selection tool to select a “best model” and then to use that model to make predictions.
2018
This paper advocates use of the generalized logarithmic mean as the midpoint of property catastrophe reinsurance layers when fitting rates on line with power curves. It demonstrates that the method is easy to implement and overcomes issues encountered when working with usual candidates for the midpoint, such as the arithmetic, geometric, or logarithmic mean.
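One common definition of the generalized logarithmic mean of order p has exactly the property the paper exploits: for a power curve r(x) = x**p, the average of r over a layer [a, b] equals r evaluated at this mean. The attachment, exhaustion, and exponent below are illustrative:

```python
# Sketch of the generalized logarithmic mean of order p (one common
# definition; p != 0, -1 and a != b) and a numerical check of the
# layer-midpoint property for a power rate-on-line curve.

def gen_log_mean(a, b, p):
    """Generalized logarithmic mean of order p."""
    return ((b ** (p + 1) - a ** (p + 1)) / ((p + 1) * (b - a))) ** (1.0 / p)

def layer_average(a, b, p, n=100000):
    """Numerical average of x**p over [a, b] (midpoint rule)."""
    h = (b - a) / n
    return sum((a + (k + 0.5) * h) ** p for k in range(n)) * h / (b - a)

a, b, p = 1.0, 5.0, -0.8   # illustrative layer bounds and curve exponent
m = gen_log_mean(a, b, p)
# m ** p matches the layer-average rate on line to numerical accuracy.
```

As a sanity check, for p = 1 the formula reduces to the arithmetic mean (a + b) / 2.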
2017
I present evidence for a model in which parameters fit to the severity distribution at each report age follow a smooth curve with random error. More formally, this is a stochastic process, and it allows us to estimate parameters of the ultimate severity distribution.
2017
Given a Bayesian Markov chain Monte Carlo (MCMC) stochastic loss reserve model for two separate lines of insurance, this paper describes how to fit a bivariate stochastic model that captures the dependencies between the two lines of insurance. A Bayesian MCMC model similar to the Changing Settlement Rate (CSR) model, as described in Meyers (2015), is initially fit to each line of insurance.
2017
Bootstrapping is often employed for quantifying the inherent variability of development triangle GLMs. While easy to implement, bootstrapping approaches frequently break down when dealing with actual data sets. Often this happens because linear rescaling leads to negative values in the resampled incremental development data.
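The breakdown can be seen in a toy residual bootstrap: rescaling a large negative Pearson residual onto a small fitted cell yields a negative pseudo-incremental. The data and helper names below are hypothetical, not drawn from any particular data set:

```python
# Minimal sketch of ODP-style residual bootstrapping for incremental
# development data, showing how resampled cells can go negative.
import random

fitted = [50.0, 30.0, 5.0, 2.0]     # fitted incremental losses (toy)
actual = [60.0, 22.0, 8.0, 0.5]     # observed incrementals (toy)

# Unscaled Pearson residuals: (actual - fitted) / sqrt(fitted)
residuals = [(a - f) / f ** 0.5 for a, f in zip(actual, fitted)]

def pseudo_triangle(fitted, residuals, rng):
    """One bootstrap pseudo-sample: fitted + sqrt(fitted) * resampled residual."""
    return [f + f ** 0.5 * rng.choice(residuals) for f in fitted]

rng = random.Random(0)
sample = pseudo_triangle(fitted, residuals, rng)
# A small fitted cell (e.g. 2.0) paired with the most negative residual
# produces a negative pseudo-incremental, which is the breakdown noted above.
```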
2017
In property-casualty insurance ratemaking, insurers often have access to external information, which could be manual rates from a rating bureau or scores from a commercial predictive model. Such collateral information could be valuable because the insurer might have neither sufficient rating information nor the predictive modeling expertise to produce an effective score.
2017
In this paper, we study reinsurance treaties between an insurer and a reinsurer, considering both parties’ interests. Most papers only focus on the insurer’s point of view. The latest research considering both sides has considerably oversimplified the joint survival function. This situation leads to an unrealistic optimal solution; one of the parties can make risk-free profits while the other bears all the risk.
2017
Analysis of insurance data provides input for making decisions regarding underwriting, pricing of insurance products, and claims, as well as profitability analysis. In this paper, we consider graphical modeling as a vehicle to reveal the dependency structure of categorical variables used in the Australian automobile data. The methodology developed here may supplement the traditional approach to ratemaking.
2017
This paper discusses some strategies to better handle the modeling of loss development patterns. Some improvements to current curve- and distribution-fitting strategies are shown, including the use of smoothing splines to help the modeled patterns better fit the data. A strategy is shown for applying credibility to these curves that produces results that are well-behaved and that can be implemented without the use of Bayesian software.
2017
Excess of policy limits (XPL) losses are a phenomenon that presents challenges for the practicing actuary. This paper proposes using a classic actuarial framework of frequency and severity, modified to address the unique challenge of XPL. The result is an integrated model of XPL losses together with non-XPL losses.
2017
Claim management requires applying statistical techniques in the analysis and interpretation of the claims data. The central piece of claim management is claims modeling and prediction.
2016
Insurance claims fraud is one of the major concerns in the insurance industry. According to many estimates, excess payments due to fraudulent claims account for a large percentage of the total payments affecting all classes of insurance.
2016
Insurance policies often contain optional insurance coverages known as endorsements. Because these additional coverages are typically inexpensive relative to primary coverages and data can be sparse (coverages are optional), rating of endorsements is often done in an ad hoc manner after a primary analysis has been conducted.
2016
In this paper, we present a stochastic loss development approach that models all the core components of the claims process separately. The benefits of doing so are discussed, including the provision of more accurate results by increasing the data available to analyze. This also allows for finer segmentations, which is helpful for pricing and profitability analysis.
2016
It has been recently shown that numerical semiparametric bounds on the expected payoff of financial or actuarial instruments can be computed using semidefinite programming. However, this approach has practical limitations. Here we use column generation, a classical optimization technique, to address these limitations.
2016
The purpose of the present paper has been to test whether loss reserving models that rely on claim count data can produce better forecasts than the chain ladder model (which does not rely on counts)—better in the sense of being subject to a lesser prediction error.
The question at issue has been tested empirically by reference to the Meyers-Shi data set. Conclusions are drawn on the basis of the emerging numerical evidence.
2016
Actuaries have always had the impression that the chain-ladder reserving method applied to real data has some kind of “upward” bias. This bias will be explained by the newly reported claims (true IBNR) and taken into account with an additive part in the age-to-age development. The multiplicative part in the development is understood to be restricted to the changes in the already reported claims (IBNER, “incurred but not enough reserved”).
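The additive-plus-multiplicative development described here can be illustrated by fitting C_{j+1} = f * C_j + a across accident years with ordinary least squares, where f captures development on already-reported claims (IBNER) and a captures newly reported claims (true IBNR). The toy data and function name are illustrative, not the paper's estimator:

```python
# Sketch of an affine age-to-age development fit: a multiplicative part f
# (IBNER) plus an additive part a (true IBNR); data are illustrative.

def fit_affine_development(col_j, col_j1):
    """Least-squares fit of C_{j+1} = f * C_j + a across accident years."""
    n = len(col_j)
    mx = sum(col_j) / n
    my = sum(col_j1) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(col_j, col_j1))
    sxx = sum((x - mx) ** 2 for x in col_j)
    f = sxy / sxx          # multiplicative (IBNER) part
    a = my - f * mx        # additive (true IBNR) part
    return f, a

# Toy reported claims at ages j and j+1 for four accident years,
# constructed so that C_{j+1} = 1.1 * C_j + 5 exactly:
c_j  = [100.0, 120.0, 90.0, 110.0]
c_j1 = [115.0, 137.0, 104.0, 126.0]
f, a = fit_affine_development(c_j, c_j1)
```

A purely multiplicative chain-ladder fit to the same data would fold the additive IBNR term into the factor, which is the source of the "upward" bias discussed above.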
2016
Nonparametric regression predicts one variable, called the response, given another variable, called the predictor, without any assumption about the relationship between these two random variables. The traditional data used in nonparametric regression are a sample from the two variables; that is, a matrix with two complete columns.
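A minimal example of nonparametric regression is the Nadaraya-Watson kernel estimator, sketched here with illustrative data and bandwidth (the setting the abstract goes on to describe, with incomplete columns, is more general):

```python
# Nadaraya-Watson kernel regression sketch: estimate E[Y | X = x0] as a
# kernel-weighted average of responses, with no assumed functional form.
import math

def nw_estimate(x0, xs, ys, bandwidth=0.5):
    """Gaussian-kernel weighted average of the responses near x0."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]   # roughly y = x, with noise
# nw_estimate(2.0, xs, ys) smoothly interpolates the nearby responses.
```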