Browse Research

Insurance policies often contain optional insurance coverages known as endorsements. Because these additional coverages are typically inexpensive relative to primary coverages and data can be sparse (coverages are optional), rating of endorsements is often done in an ad hoc manner after a primary analysis has been conducted.
In this paper, we present a stochastic loss development approach that models all the core components of the claims process separately. The benefits of doing so are discussed, including more accurate results, since more data becomes available for the analysis. This also allows for finer segmentations, which is helpful for pricing and profitability analysis.
Generalized linear models (GLMs) have been in use for over thirty years, and there is no shortage of textbooks and scholarly articles on their underlying theory and application in solving any number of useful problems. Actuaries have for many years used GLMs to classify risks, but it is only relatively recently that levels of interest and rates of adoption have increased to the point where they now seem near-ubiquitous.
The purpose of the monograph is to provide access to generalized linear models for loss reserving, initially with a strong emphasis on the chain ladder. The chain ladder is formulated in a GLM context, as is the statistical distribution of the loss reserve. This structure is then used to test the need for departure from the chain ladder model and to formulate any required model extensions.
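As a point of reference for the GLM formulation mentioned above, the cross-classified over-dispersed Poisson model is the standard way of casting the chain ladder as a GLM in the stochastic reserving literature; the notation below is a generic sketch rather than the monograph's own.

```latex
% Cross-classified ODP GLM whose maximum likelihood fitted values reproduce the
% chain ladder forecasts; Y_{ij} is the incremental loss for accident period i
% and development period j.
\[
Y_{ij} \sim \mathrm{ODP}(m_{ij}, \phi), \qquad
\log m_{ij} = c + \alpha_i + \beta_j, \qquad
\operatorname{Var}(Y_{ij}) = \phi\, m_{ij},
\]
% with identifiability constraints such as \alpha_1 = \beta_1 = 0.
```

Fitting this model by maximum likelihood gives the same reserve point estimates as the deterministic chain ladder, which is what makes the GLM framing a natural starting point before testing departures from it.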
This paper provides an accessible account of potential pitfalls in the use of predictive models in property and casualty insurance. With a series of entertaining vignettes, it illustrates what can go wrong. The paper should leave the reader with a better appreciation of when predictive modeling is the tool of choice and when it needs to be used with caution. Keywords: Predictive modeling, GLM
Given a Bayesian Markov Chain Monte Carlo (MCMC) stochastic loss reserve model for two separate lines of insurance, this paper describes how to fit a bivariate stochastic model that captures the dependencies between the two lines of insurance. A Bayesian MCMC model similar to the Changing Settlement Rate (CSR) model, as described in Meyers (2015), is initially fit to each line of insurance.
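As a rough schematic of how such a dependency layer can look (illustrative notation only, not necessarily the paper's exact specification), the two lines' log losses can be given a joint bivariate normal distribution whose marginal structures come from the univariate CSR-type fits:

```latex
% Illustrative bivariate layer linking two univariate Bayesian MCMC reserve models;
% rho is the between-line correlation and receives its own prior.
\[
\begin{pmatrix} \log C^{(1)}_{wd} \\[2pt] \log C^{(2)}_{wd} \end{pmatrix}
\sim N\!\left(
\begin{pmatrix} \mu^{(1)}_{wd} \\[2pt] \mu^{(2)}_{wd} \end{pmatrix},
\begin{pmatrix} \sigma^{2}_{1,d} & \rho\,\sigma_{1,d}\,\sigma_{2,d} \\
\rho\,\sigma_{1,d}\,\sigma_{2,d} & \sigma^{2}_{2,d} \end{pmatrix}
\right),
\]
% where mu^(k)_{wd} and sigma_{k,d} are the mean and scale structures from line k's
% univariate model, indexed by accident year w and development year d.
```

The posterior distribution of $\rho$ then quantifies how strongly adverse development in one line tends to accompany adverse development in the other.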
The global insurance protection gap is one of the most pressing issues facing our society. It leads to a severe lack of societal resilience in many developing and emerging countries, where insurance today hardly plays any role when it comes to mitigating the impacts of natural disasters or pandemics, to name just two of the major societal risks. More often than not, all economic losses remain basically uninsured.
In spite of its increasing relevance for businesses today, research on cyber risk is limited. Many papers have been devoted to the technological aspects, but relatively little research has been published in the business and economics literature. The existing articles emphasize the lack of data and the modelling challenges (e.g., Maillart and Sornette, 2010; Biener, Eling, and Wirfs, 2015), the complexity and dependent risk structure (e.g.
It has recently been shown that numerical semiparametric bounds on the expected payoff of financial or actuarial instruments can be computed using semidefinite programming. However, this approach has practical limitations. Here we use column generation, a classical optimization technique, to address these limitations.
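For context, the generic semiparametric (moment) bound problem that both the semidefinite and the column-generation approaches target can be written as follows; this is a sketch of the standard formulation rather than the paper's own notation.

```latex
% Upper semiparametric bound on the expected payoff phi(X), given only the first
% n moments m_1, ..., m_n of X and its support [a, b].
\[
\max_{F}\ \mathbb{E}_F[\phi(X)]
\quad \text{subject to} \quad
\mathbb{E}_F[X^{k}] = m_k,\ k = 1,\dots,n, \qquad
\operatorname{supp}(F) \subseteq [a,b].
\]
```

Restricting $F$ to a finite set of candidate support points turns this into a linear program; column generation adds support points (columns) only as they improve the bound, which is what lets it sidestep the practical limitations of solving the full semidefinite formulation directly.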
The purpose of the present paper has been to test whether loss reserving models that rely on claim count data can produce better forecasts than the chain ladder model (which does not rely on counts)—better in the sense of being subject to a smaller prediction error. The question at issue has been tested empirically by reference to the Meyers-Shi data set. Conclusions are drawn on the basis of the emerging numerical evidence.
Actuaries have always had the impression that the chain-ladder reserving method applied to real data has some kind of “upward” bias. This bias will be explained by the newly reported claims (true IBNR) and taken into account with an additive part in the age-to-age development. The multiplicative part in the development is understood to be restricted to the changes in the already reported claims (IBNER, “incurred but not enough reserved”).
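An illustrative way to write the development assumption described above (the notation here is ours, not necessarily the paper's) is:

```latex
% Development with a multiplicative IBNER part and an additive true-IBNR part.
\[
\mathbb{E}\!\left[ C_{i,j+1} \mid C_{i,j} \right] \;=\; f_j\, C_{i,j} \;+\; a_{j+1},
\]
% C_{i,j}: reported claims for accident period i at development age j;
% f_j: development of claims already reported (IBNER);
% a_{j+1}: expected newly reported claims (true IBNR) emerging at age j+1.
```

Setting $a_{j+1}=0$ recovers the pure chain ladder, which then has to absorb the newly reported claims into its multiplicative factors, the mechanism behind the "upward" bias discussed above.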
Nonparametric regression addresses the problem of predicting one variable, called the response, given another variable, called the predictor, without making any assumption about the relationship between the two random variables. The traditional data used in nonparametric regression are a sample from the two variables; that is, a matrix with two complete columns.
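Since the abstract does not spell out a particular estimator, a minimal Nadaraya-Watson kernel smoother is sketched below purely to illustrate what "no assumption about the relationship" means in practice; the function name and data are invented for the example and are not the paper's own method.

```python
# Minimal sketch of nonparametric regression via a Nadaraya-Watson kernel smoother.
# Illustration only: not the estimator developed in the paper.
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.5):
    """Estimate E[Y | X = x] as a kernel-weighted average of observed responses."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x0 in np.atleast_1d(x_eval):
        # Gaussian kernel weights: observations with predictors near x0 count more.
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 10.0, 200)                     # predictor sample
    y = np.sin(x) + rng.normal(0.0, 0.3, x.size)        # response sample
    grid = np.linspace(0.0, 10.0, 5)
    print(nadaraya_watson(x, y, grid, bandwidth=0.7))   # no functional form assumed
```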
Moment-based approximations have been extensively analyzed over the years (see, e.g., Osogami and Harchol-Balter 2006 and references therein). A number of specific phase-type (and non-phase-type) distributions have been considered to tackle the moment-matching problem (see, for instance, Johnson and Taaffe 1989).
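For reference, the quantities being matched in such phase-type fits are the distribution's moments, which have a closed form; the statement below is the standard formula, with the choice of subclass left open as in the literature cited above.

```latex
% Moments of a phase-type distribution PH(alpha, T) with initial vector alpha and
% sub-generator matrix T; 1 denotes a column vector of ones.
\[
\mathbb{E}\!\left[X^{k}\right] \;=\; k!\;\boldsymbol{\alpha}\,(-T)^{-k}\,\mathbf{1},
\qquad k = 1, 2, \dots
\]
```

A moment-matching procedure chooses $(\boldsymbol{\alpha}, T)$, usually within a small parametric subclass, so that the first few of these moments equal the target moments.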
This paper introduces sequential statistical methods in actuarial science. As an example of sequential decision making that is based on the data accrued in real time, it focuses on sequential testing for full credibility. Classical statistical tools are used to determine the stopping time and the terminal decision that controls the overall error rate and power of the procedure.
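As background for the full-credibility example, the classical limited-fluctuation standard that the sequential procedure replaces with a data-driven stopping time has the familiar textbook form below; this is context, not the paper's derivation.

```latex
% Classical full-credibility standard: expected number of claims required so that
% the estimate is within 100k% of the true value with probability p.
\[
n_F \;=\; \left( \frac{z_{(1+p)/2}}{k} \right)^{2}
\left( 1 + \frac{\sigma_X^{2}}{\mu_X^{2}} \right),
\]
% sigma_X^2 / mu_X^2 is the squared coefficient of variation of severity; for claim
% counts alone the second factor is dropped, giving (z_{0.95}/0.05)^2 ~ 1,082 at
% p = 0.90 and k = 5%.
```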
This paper studies a novel capital allocation framework based on the tail mean-variance (TMV) principle for multivariate risks. The new capital allocation model has many intriguing properties, such as controlling the magnitude and variability of tail risks simultaneously. General formulas for optimal capital allocations under the semideviation distance measure are discussed.
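The tail mean-variance criterion referred to above is usually written as follows, where $q$ is the tail probability level and $\lambda > 0$ weights the variability penalty.

```latex
% Tail mean-variance risk measure: tail conditional expectation plus a penalty on
% the conditional variance of the tail.
\[
\operatorname{TMV}_q(X) \;=\;
\mathbb{E}\!\left[ X \mid X > \operatorname{VaR}_q(X) \right]
\;+\; \lambda \operatorname{Var}\!\left( X \mid X > \operatorname{VaR}_q(X) \right).
\]
```

Allocating capital by such a criterion therefore controls both the magnitude and the variability of tail risks at once, which is the property highlighted in the abstract.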
The concept of excess losses is widely used in reinsurance and retrospective insurance rating. The mathematics related to it has been studied extensively in the property and casualty actuarial literature. However, it seems that the formulas for higher moments of the excess losses are not readily available.
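A standard identity for those higher moments, stated here as context rather than as the paper's result, expresses the $n$-th moment of the excess loss over a retention $d$ in terms of the survival function.

```latex
% n-th moment of the excess loss (X - d)_+ via the survival function
% \bar F(x) = Pr(X > x); the second equality follows by integration by parts.
\[
\mathbb{E}\!\left[ (X-d)_+^{\,n} \right]
\;=\; \int_d^{\infty} (x-d)^{n}\, dF(x)
\;=\; n \int_d^{\infty} (x-d)^{\,n-1}\, \bar F(x)\, dx,
\qquad n \ge 1 .
\]
```

For $n = 1$ this reduces to the familiar expected excess loss $\int_d^\infty \bar F(x)\,dx$ used in excess-of-loss reinsurance pricing.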
Experience shows that U.S. risk-based capital measures do not always signal financial troubles until it is too late. Here we present an alternative, reasonable capital adequacy model that can be easily implemented using data commonly available to company actuaries. The model addresses the three most significant risks common to property and casualty companies—namely, pricing, interest rate, and reserving risk.
The statistical foundation of disaster risk analysis is actual loss experience. The past cannot be changed and is treated by actuaries as fixed. But from a scientific perspective, history is just one realization of what might have happened, given the randomness and chaotic dynamics of nature. Stochastic analysis of the past is an exploratory exercise in counterfactual history, considering alternative possible scenarios.
Despite the occurrence of numerous casualty catastrophes that have had a significant impact on the insurance industry, the state of casualty catastrophe modeling lags far behind that of property catastrophes. One reason for this lag is that casualty catastrophes develop slowly, as opposed to property catastrophes that occur suddenly, so that the impact, both financial and psychological, has less of a shock element.
This paper is concerned with dependency between business segments in the Property & Casualty industry. When considering the business of an insurance company at the aggregate level, dependence structures can have a major impact in several areas of Enterprise Risk Management, such as in claims reserving and capital modelling.
Actuarial research has aimed to tame uncertainty, typically by building on two premises: (a) the malleability of uncertainty to quantification and (b) the separability of quantitative modelling from decision principles. We argue that neither of the two premises holds true. The first ignores deeper ‘ontological’ and ‘framing’ uncertainties, which do not lend themselves to quantification.
Starting in 2012 and continuing for several years since, I have been witness to a burst of innovation in the seemingly staid world of crop insurance, which is too often dismissed as an obscure backwater of the U.S. insurance industry.
We generally do not tend to think of innovation and risk management as compatible themes. Innovation conjures up images of bold new ideas, thinking outside the box, disrupting established ways of doing things, and breaking new ground.
There are many papers that describe the over-dispersed Poisson (ODP) bootstrap model, but these papers are either limited to the basic calculations of the model or focused on its theoretical aspects, and they always implicitly assume that the ODP bootstrap model is perfectly suited to the data being analyzed.
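For readers unfamiliar with the "basic calculations" being referred to, the sketch below outlines the core resampling loop of an England-Verrall style ODP bootstrap. It is deliberately stripped down: the process-variance stage, the degrees-of-freedom adjustment of the residuals, and all diagnostics are omitted, and the triangle and names are illustrative only.

```python
# Stripped-down sketch of the basic ODP bootstrap calculations (England-Verrall style).
# Omitted: process variance, residual bias adjustment, and model diagnostics.
import numpy as np

def dev_factors(cum):
    """Volume-weighted chain-ladder development factors from a cumulative triangle."""
    n = cum.shape[0]
    return np.array([
        np.nansum(cum[: n - 1 - j, j + 1]) / np.nansum(cum[: n - 1 - j, j])
        for j in range(n - 1)
    ])

def fitted_incrementals(cum, f):
    """Back-fit expected cumulatives from the latest diagonal, then difference."""
    n = cum.shape[0]
    fit = np.full_like(cum, np.nan)
    for i in range(n):
        last = n - 1 - i
        fit[i, last] = cum[i, last]
        for j in range(last - 1, -1, -1):
            fit[i, j] = fit[i, j + 1] / f[j]
    return np.diff(fit, axis=1, prepend=0.0)

def odp_bootstrap(cum, n_sims=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = cum.shape[0]
    obs = ~np.isnan(cum)                                   # observed (past) cells
    inc = np.diff(cum, axis=1, prepend=0.0)                # actual incrementals
    fit_inc = fitted_incrementals(cum, dev_factors(cum))
    resid = ((inc - fit_inc) / np.sqrt(fit_inc))[obs]      # unadjusted Pearson residuals
    reserves = np.empty(n_sims)
    for s in range(n_sims):
        r_star = rng.choice(resid, size=inc.shape)         # resample with replacement
        pseudo_inc = np.where(obs, fit_inc + r_star * np.sqrt(fit_inc), np.nan)
        pseudo_cum = np.cumsum(np.nan_to_num(pseudo_inc), axis=1)
        pseudo_cum[~obs] = np.nan
        f = dev_factors(pseudo_cum)                        # refit on the pseudo triangle
        total = 0.0
        for i in range(1, n):                              # project each open row
            latest = pseudo_cum[i, n - 1 - i]
            total += latest * np.prod(f[n - 1 - i:]) - latest
        reserves[s] = total
    return reserves

if __name__ == "__main__":
    tri = np.array([                                       # toy cumulative triangle
        [1000.0, 1800.0, 2100.0, 2200.0],
        [1100.0, 1950.0, 2250.0, np.nan],
        [1200.0, 2150.0, np.nan, np.nan],
        [1300.0, np.nan, np.nan, np.nan],
    ])
    res = odp_bootstrap(tri, n_sims=2000)
    print(round(res.mean()), round(np.percentile(res, 95)))
```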
“The Actuary and IBNR” was published in 1972 by Ronald Bornhuetter and Ronald Ferguson. The methodology from this paper has exploded into a virtually universal methodology used by actuaries, commonly referred to as the “Born Ferg” or “BF” method. The technique and its application are included in the syllabus for the CAS actuarial exams, and the use of the technique is pervasive in both the reserving and pricing worlds.