Browse Research

Bootstrapping is often employed for quantifying the inherent variability of development triangle GLMs. While easy to implement, bootstrapping approaches frequently break down when dealing with actual data sets. Often this happens because linear rescaling leads to negative values in the resampled incremental development data.
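A minimal sketch of the rescaling step behind that breakdown (an over-dispersed Poisson bootstrap with Pearson residuals is assumed here purely for illustration; the triangle values and function name are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fitted incremental losses from a development-triangle GLM;
# the small cells in the tail are where the problem tends to show up.
fitted = np.array([100.0, 60.0, 25.0, 5.0, 1.0])

# Illustrative Pearson residuals from the same fit.
residuals = np.array([-1.2, 0.8, -0.5, 1.5, -1.8])

def odp_bootstrap_sample(fitted, residuals, rng):
    """One pseudo-sample of incrementals: fitted + resampled residual * sqrt(fitted).

    Under an over-dispersed Poisson GLM the Pearson residual is
    (actual - fitted) / sqrt(fitted), so inverting it rescales each resampled
    residual by sqrt(fitted). A large negative residual landing on a small
    fitted cell can push the pseudo-incremental below zero.
    """
    resampled = rng.choice(residuals, size=fitted.shape, replace=True)
    return fitted + resampled * np.sqrt(fitted)

sample = odp_bootstrap_sample(fitted, residuals, rng)
print(sample)
print("negative incrementals produced:", bool((sample < 0).any()))
```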
In property-casualty insurance ratemaking, insurers often have access to external information, which could be manual rates from a rating bureau or scores from a commercial predictive model. Such collateral information could be valuable because the insurer may have neither sufficient rating information nor the predictive modeling expertise to produce an effective score.
In this paper, we study reinsurance treaties between an insurer and a reinsurer, considering both parties’ interests. Most papers only focus on the insurer’s point of view. The latest research considering both sides has considerably oversimplified the joint survival function. This situation leads to an unrealistic optimal solution; one of the parties can make risk-free profits while the other bears all the risk.
Analysis of insurance data provides input for making decisions regarding underwriting, pricing of insurance products, and claims, as well as profitability analysis. In this paper, we consider graphical modeling as a vehicle to reveal the dependency structure of the categorical variables used in the Australian Automobile data. The methodology developed here may supplement the traditional approach to ratemaking.
This paper discusses some strategies to better handle the modeling of loss development patterns. Some improvements to current curve- and distribution-fitting strategies are shown, including the use of smoothing splines to help the modeled patterns better fit the data. A strategy is shown for applying credibility to these curves that produces results that are well-behaved and that can be implemented without the use of Bayesian software.
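A hedged sketch of the smoothing-spline idea (not the paper's implementation; the ages and factors are invented, and fitting on log(ldf - 1) is just one convenient way to keep the smoothed factors above 1):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Illustrative empirical age-to-age development factors by maturity (months).
ages = np.array([12, 24, 36, 48, 60, 72, 84], dtype=float)
ldfs = np.array([2.10, 1.45, 1.20, 1.11, 1.06, 1.03, 1.02])

# Fit the spline on log(ldf - 1) so the smoothed factors stay above 1 as they decay.
spline = UnivariateSpline(ages, np.log(ldfs - 1.0), s=0.05)

# Smoothed factors at the original ages, plus one interpolated maturity.
smoothed = 1.0 + np.exp(spline(ages))
interp_30 = 1.0 + np.exp(spline(30.0))
print(smoothed.round(3))
print("interpolated factor at 30 months:", round(float(interp_30), 3))
```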
Excess of policy limits (XPL) losses are a phenomenon that presents challenges for the practicing actuary. This paper proposes using a classic actuarial framework of frequency and severity, modified to address the unique challenge of XPL. The result is an integrated model of XPL losses together with non-XPL losses.
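The frequency-severity framing can be pictured with a small Monte Carlo sketch (assumed Poisson counts, lognormal severities, and a stylized probability that the excess over the policy limit actually attaches to the insurer; none of this is the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(42)

policy_limit = 1_000_000
lam = 3.0                # assumed annual claim frequency
mu, sigma = 11.0, 1.5    # assumed lognormal severity parameters
p_xpl = 0.2              # assumed chance the excess attaches to the insurer

def simulate_year(rng):
    """One simulated year: (non-XPL losses capped at the limit, XPL losses)."""
    n = rng.poisson(lam)
    severities = rng.lognormal(mu, sigma, size=n)
    capped = np.minimum(severities, policy_limit)
    excess = np.maximum(severities - policy_limit, 0.0)
    attaches = rng.random(n) < p_xpl          # stylized per-claim XPL trigger
    return capped.sum(), (excess * attaches).sum()

sims = np.array([simulate_year(rng) for _ in range(10_000)])
print("mean non-XPL losses:", round(sims[:, 0].mean()))
print("mean XPL losses:    ", round(sims[:, 1].mean()))
```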
Claim management requires applying statistical techniques to the analysis and interpretation of claims data. The central piece of claim management is claims modeling and prediction.
Insurance claims fraud is one of the major concerns in the insurance industry. According to many estimates, excess payments due to fraudulent claims account for a large percentage of total payments across all classes of insurance.
Insurance policies often contain optional insurance coverages known as endorsements. Because these additional coverages are typically inexpensive relative to primary coverages and data can be sparse (coverages are optional), rating of endorsements is often done in an ad hoc manner after a primary analysis has been conducted.
In this paper, we present a stochastic loss development approach that models all the core components of the claims process separately. The benefits of doing so are discussed, including the provision of more accurate results by increasing the data available to analyze. This also allows for finer segmentations, which is helpful for pricing and profitability analysis.
It has been recently shown that numerical semiparametric bounds on the expected payoff of financial or actuarial instruments can be computed using semidefinite programming. However, this approach has practical limitations. Here we use column generation, a classical optimization technique, to address these limitations.
The purpose of the present paper is to test whether loss reserving models that rely on claim count data can produce better forecasts than the chain ladder model (which does not rely on counts), better in the sense of being subject to a smaller prediction error. The question at issue is tested empirically by reference to the Meyers-Shi data set. Conclusions are drawn on the basis of the emerging numerical evidence.
Actuaries have always had the impression that the chain-ladder reserving method applied to real data has some kind of “upward” bias. This bias will be explained by the newly reported claims (true IBNR) and taken into account with an additive part in the age-to-age development. The multiplicative part in the development is understood to be restricted to the changes in the already reported claims (IBNER, “incurred but not enough reserved”).
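In symbols, the decomposition described above amounts to an age-to-age rule of roughly the following form (notation assumed here for illustration, not taken from the paper), with the factor acting on the already reported claims (IBNER) and the additive term picking up the newly reported claims (true IBNR):

```latex
C_{i,k+1} = f_k \, C_{i,k} + a_k
```

Omitting the additive term forces the multiplicative factor to absorb the new reports, which is one way to read the perceived upward bias.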
Nonparametric regression addresses the problem of predicting one variable, called the response, given another variable, called the predictor, without any assumption about the relationship between the two random variables. The traditional data used in nonparametric regression are a sample from the two variables; that is, a matrix with two complete columns.
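For a concrete picture, here is a minimal Nadaraya-Watson kernel estimator (one standard nonparametric regression, not necessarily the estimator studied in the paper) applied to a simulated complete two-column sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Traditional complete sample: a predictor column x and a response column y.
x = rng.uniform(0.0, 1.0, size=200)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, size=200)

def nadaraya_watson(x0, x, y, bandwidth=0.05):
    """Kernel-weighted local average of y around x0 (Gaussian kernel)."""
    weights = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(weights * y) / np.sum(weights)

grid = np.linspace(0.0, 1.0, 5)
print([round(nadaraya_watson(g, x, y), 3) for g in grid])
```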
Moment-based approximations have been extensively analyzed over the years (see, e.g., Osogami and Harchol-Balter 2006 and references therein). A number of specific phase-type (and non-phase-type) distributions have been considered to tackle the moment-matching problem (see, for instance, Johnson and Taaffe 1989).
This paper introduces sequential statistical methods in actuarial science. As an example of sequential decision making that is based on the data accrued in real time, it focuses on sequential testing for full credibility. Classical statistical tools are used to determine the stopping time and the terminal decision that controls the overall error rate and power of the procedure.
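As a rough illustration of the idea (the classical limited-fluctuation standard with an assumed 90% probability of being within 5% of the mean; the simple stopping rule below is a stand-in, not the paper's error-controlled procedure):

```python
from scipy.stats import norm

def full_credibility_standard(p=0.90, k=0.05, severity_cv=0.0):
    """Classical expected-claim-count standard: within 100k% of the mean with
    probability p; severity_cv > 0 extends the standard to aggregate losses."""
    z = norm.ppf((1.0 + p) / 2.0)
    return (z / k) ** 2 * (1.0 + severity_cv ** 2)

def sequential_full_credibility(claim_counts, p=0.90, k=0.05):
    """Stop at the first period where cumulative claims reach the standard."""
    standard = full_credibility_standard(p, k)
    total = 0
    for period, n in enumerate(claim_counts, start=1):
        total += n
        if total >= standard:
            return period, total, standard
    return None, total, standard

print(full_credibility_standard())                       # about 1,082 claims
print(sequential_full_credibility([300, 280, 310, 295])) # stops in period 4
```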
This paper studies a novel capital allocation framework based on the tail mean-variance (TMV) principle for multivariate risks. The new capital allocation model has many intriguing properties, such as controlling the magnitude and variability of tail risks simultaneously. General formulas for optimal capital allocations under the semideviation distance measure are discussed.
The concept of excess losses is widely used in reinsurance and retrospective insurance rating. The mathematics related to it has been studied extensively in the property and casualty actuarial literature. However, it seems that the formulas for higher moments of the excess losses are not readily available.
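For reference, the standard survival-function representation of the excess-loss moments (a textbook identity, not a summary of the paper's results) is, for a retention $d$ and $S(x)=\Pr(X>x)$:

```latex
\mathbb{E}\big[(X-d)_+^{\,n}\big]
  = \int_d^{\infty} (x-d)^n \, dF(x)
  = n \int_d^{\infty} (x-d)^{\,n-1} S(x)\, dx , \qquad n \ge 1,
```

provided the moments exist; the second equality follows by integration by parts, and $n=1$ recovers the familiar $\mathbb{E}[(X-d)_+] = \int_d^{\infty} S(x)\,dx$.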
Experience shows that U.S. risk-based capital measures do not always signal financial troubles until it is too late. Here we present an alternative, reasonable capital adequacy model that can be easily implemented using data commonly available to company actuaries. The model addresses the three most significant risks common to property and casualty companies—namely, pricing, interest rate, and reserving risk.
The statistical foundation of disaster risk analysis is actual loss experience. The past cannot be changed and is treated by actuaries as fixed. But from a scientific perspective, history is just one realization of what might have happened, given the randomness and chaotic dynamics of nature. Stochastic analysis of the past is an exploratory exercise in counterfactual history, considering alternative possible scenarios.
Despite the occurrence of numerous casualty catastrophes that have had a significant impact on the insurance industry, the state of casualty catastrophe modeling lags far behind that of property catastrophes. One reason for this lag is that casualty catastrophes develop slowly, as opposed to property catastrophes that occur suddenly, so that the impact, both financial and psychological, has less of a shock element.
PEBELS is a method for estimating the expected loss cost for each loss layer of an individual property risk regardless of size. By providing maximum resolution in estimating layer loss costs, PEBELS facilitates increased accuracy and sophistication in many actuarial pricing applications such as ratemaking, predictive modeling, catastrophe modeling, and reinsurance pricing.
When building statistical models to help estimate future results, actuaries need to be aware that not only is there uncertainty inherent in random events (process risk), but there is also uncertainty inherent in using a finite sample to parameterize the models (parameter risk).
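A small simulation makes the decomposition concrete (an assumed Poisson claim count, with parameter uncertainty represented by bootstrapping the fitted rate; this illustrates the concept, not a method from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed "true" annual claim rate and a finite sample used to estimate it.
true_rate = 5.0
observed = rng.poisson(true_rate, size=20)

n_outer, n_inner = 2_000, 200

# Process risk only: simulate from the single fitted rate.
process_only = rng.poisson(observed.mean(), size=n_outer * n_inner)

# Process + parameter risk: redraw the fitted rate from resampled data each time.
draws = []
for _ in range(n_outer):
    boot_rate = rng.choice(observed, size=observed.size, replace=True).mean()
    draws.append(rng.poisson(boot_rate, size=n_inner))
both = np.concatenate(draws)

print("variance, process risk only:       ", round(float(process_only.var()), 2))
print("variance, process + parameter risk:", round(float(both.var()), 2))
```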
A representative data set is used to provide an example comparing classical and Bayesian approaches to making inferences about the point in a sequence of random variables at which the underlying distribution may shift. Inferences about the underlying distributions themselves are also made. Most of the underlying R code used in the analysis is shown in the appendix.
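To give a flavor of the comparison (a simplified stand-in, not the paper's R code: a normal mean shift with known unit variance, a flat prior on the changepoint, and segment means plugged in empirical-Bayes style):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Simulated sequence with a mean shift after observation 30 (known unit variance).
data = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.5, 1.0, 20)])

def changepoint_posterior(x):
    """Posterior over the changepoint k under a flat prior, with each segment's
    mean estimated by its sample average (an empirical-Bayes simplification)."""
    n = len(x)
    log_post = np.full(n - 1, -np.inf)
    for k in range(1, n):                     # shift occurs after observation k
        left, right = x[:k], x[k:]
        ll = (norm.logpdf(left, left.mean(), 1.0).sum()
              + norm.logpdf(right, right.mean(), 1.0).sum())
        log_post[k - 1] = ll
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

post = changepoint_posterior(data)
print("most probable changepoint after observation:", int(np.argmax(post)) + 1)
```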
My congratulations to Mr. Leigh J. Halliwell on this paper that clearly presents the mathematics of excess losses with an interesting example. I agree with him that the mathematics of excess losses is beautiful and powerful. However, the mathematics of excess losses also contains several subtle points that are not mentioned in the paper. This discussion note complements the article by clarifying some of these points.