Browse Research

Traditional reserve estimators such as chain-ladder and Bornhuetter-Ferguson model unpaid losses as a function of accident period and lag to payment or reporting. The result of primary interest is expected future losses; these are derived from intermediate results such as lag factors and loss ratios.
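The chain-ladder mechanics referenced in this abstract are easy to sketch in code. Below is a minimal, illustrative implementation on a hypothetical cumulative loss triangle (the data and variable names are invented, not taken from the paper): volume-weighted lag factors are estimated from the triangle and chained to project each accident period to ultimate.

```python
import numpy as np

# Hypothetical cumulative loss triangle: rows are accident periods,
# columns are lags to payment/reporting; NaN marks unobserved cells.
triangle = np.array([
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, np.nan],
    [115.0, 170.0, np.nan, np.nan],
    [125.0, np.nan, np.nan, np.nan],
])

# Volume-weighted lag (development) factors, one per adjacent pair of lags,
# using only accident periods observed at both lags.
factors = []
for j in range(triangle.shape[1] - 1):
    both = ~np.isnan(triangle[:, j + 1])
    factors.append(triangle[both, j + 1].sum() / triangle[both, j].sum())

# Chain the remaining factors to project each accident period to ultimate;
# expected future losses are ultimates minus the latest observed diagonal.
latest = np.array([row[~np.isnan(row)][-1] for row in triangle])
ultimates = latest.copy()
for i in range(len(triangle)):
    j = int((~np.isnan(triangle[i])).sum()) - 1
    for f in factors[j:]:
        ultimates[i] *= f

print("lag factors:", np.round(factors, 3))
print("expected future losses:", np.round(ultimates - latest, 1))
```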
SIMEX (Simulation-Extrapolation) is a very general technique that helps correct bias in estimates caused by measurement error in predictors. The method is well established in statistical practice but seems not to be widely known in actuarial circles. Using ordinary least squares regression as an example, the method is illustrated with some simple R code.
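The abstract refers to simple R code in the paper; as a language-neutral sketch of the same idea, here is a minimal SIMEX loop in Python for the OLS attenuation case, assuming a known measurement-error standard deviation (all data and parameter values are invented for illustration). Extra error is simulated at inflation factors 1 + lambda, and the estimate is extrapolated back to lambda = -1, the hypothetical error-free case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: y depends on a true predictor x, but we only observe
# w = x + noise. Naive OLS on w attenuates the slope toward zero.
n, true_beta, sigma_u = 2000, 1.0, 0.5
x = rng.normal(size=n)
w = x + rng.normal(scale=sigma_u, size=n)      # error-prone predictor
y = true_beta * x + rng.normal(scale=0.3, size=n)

def ols_slope(pred, resp):
    return np.polyfit(pred, resp, 1)[0]

# Simulation step: add extra noise so the total measurement-error variance
# is (1 + lam) * sigma_u**2, and average the resulting slope estimates.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([ols_slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u,
                                            size=n), y)
                   for _ in range(50)])
          for lam in lams]

# Extrapolation step: fit a quadratic in lam and evaluate at lam = -1.
coefs = np.polyfit(lams, slopes, 2)
print("naive slope:     ", round(slopes[0], 3))
print("SIMEX-corrected: ", round(float(np.polyval(coefs, -1.0)), 3))
```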
This paper illustrates a comprehensive approach to utilizing, and credibility weighting, all available information for large-account and excess-of-loss treaty pricing. The typical approach to the loss experience above the basic limit is to analyze the burn costs in these excess layers directly (see Clark 2011, for example).
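As a toy illustration of the credibility-weighting idea (the form of Z, the constant k, and all numbers below are assumptions for this sketch; the paper's own procedure will differ), an observed excess-layer burn cost might be blended with an exposure-based prior like this:

```python
# Hypothetical Buhlmann-style blend of an observed excess-layer burn cost
# with an exposure-rated prior loss cost; k is an assumed credibility
# constant, not a value from the paper.
def blended_loss_cost(burn_cost, prior, n_layer_claims, k=500.0):
    z = n_layer_claims / (n_layer_claims + k)   # classical n / (n + k) form
    return z * burn_cost + (1.0 - z) * prior

print(round(blended_loss_cost(0.042, 0.055, n_layer_claims=120), 4))
```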
The mathematical foundation of on-leveling premium is explicitly stated. Combined with an appropriate set of assumptions, it yields the formulae for on-leveling premium by rate book (described within) and via the Parallelogram Method. An appendix demonstrates that this foundation subsumes all works in the bibliography.
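For a concrete Parallelogram Method example (the standard textbook computation, offered as a sketch rather than the paper's own formulation): with annual policies written uniformly over time and a single rate change effective partway through a calendar year, the on-level factor for that year follows from simple geometric areas.

```python
# Parallelogram Method sketch: annual policies written uniformly over time,
# one +5% rate change effective at fraction tau through the calendar year.
# All numbers are illustrative.
tau, rate_change = 0.5, 1.05

# Area of the triangle earned at the new rate within the year of the change.
w_new = (1.0 - tau) ** 2 / 2.0               # = 0.125 when tau = 0.5

avg_level = (1.0 - w_new) * 1.0 + w_new * rate_change
current_level = rate_change
print("on-level factor:", round(current_level / avg_level, 4))
```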
This compendium summarizes the various aspects of credit risk that are important to insurance companies in general, namely corporate credit risk (single and multi-name), typical credit-sensitive securities, credit risk for individuals (including mortgage insurance), municipal credit risk, sovereign credit risk, counterparty risk, and regulatory and enterprise risk management.
We present and discuss an insurance version of the classical Capital Asset Pricing Model that offers economic pricing and risk capital allocation rules for a large class of risks, including those that are non-symmetric and heavy tailed. A number of illustrative examples are given, and convenient computational formulas are suggested.
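For reference, the classical model being extended is the familiar CAPM pricing relation, where $r_f$ is the risk-free rate and $r_m$ the market return; the insurance version in the paper generalizes beyond this symmetric, second-moment setting.

```latex
E[r_i] = r_f + \beta_i \bigl( E[r_m] - r_f \bigr),
\qquad
\beta_i = \frac{\operatorname{Cov}(r_i,\, r_m)}{\operatorname{Var}(r_m)}
```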
CAS E-Forum, Spring 2017 Volume 2 Featuring Two Reinsurance Call Papers
CAS E-Forum, Spring 2017 Featuring two CAS-Sponsored Research Reports, Ratemaking Call Papers and Independent Research
Motivation. The Hayne MLE family of models is quite elegant in its application, but, like most models, the framework needs the flexibility to deal with the many practical issues the practicing actuary faces.
Motivation. The development of a wide variety of reserve variability models has been driven primarily by the need to quantify reserve uncertainty. This quantification can serve as the basis for satisfying a number of Solvency II requirements in Europe, can be used to enhance Own Risk and Solvency Assessment (ORSA) reports, and is often used as an input to DFA or dynamic risk models, to name but a few applications.
Motivation. Supervised Learning, the building of predictive models from past examples, is an important part of Machine Learning and contains a vast and ever-increasing array of techniques that actuaries can use alongside more traditional methods. Underlying many Supervised Learning techniques are a small number of important concepts that are also relevant to many areas of actuarial practice.
Insurance claims fraud is one of the major concerns in the insurance industry. According to many estimates, excess payments due to fraudulent claims account for a large percentage of total payments, affecting all classes of insurance.
Insurance policies often contain optional insurance coverages known as endorsements. Because these additional coverages are typically inexpensive relative to primary coverages and data can be sparse (coverages are optional), rating of endorsements is often done in an ad hoc manner after a primary analysis has been conducted.
In this paper, we present a stochastic loss development approach that models each of the core components of the claims process separately. The benefits of doing so are discussed, including more accurate results, since modeling components separately increases the data available for each analysis. The approach also allows for finer segmentations, which is helpful for pricing and profitability analysis.
Generalized linear models have been in use for over thirty years, and there is no shortage of textbooks and scholarly articles on their underlying theory and application in solving any number of useful problems. Actuaries have for many years used GLMs to classify risks, but it is only relatively recently that levels of interest and rates of adoption have increased to the point where it now seems as though they are near-ubiquitous.
The purpose of the monograph is to provide access to generalized linear models for loss reserving, initially with a strong emphasis on the chain ladder. The chain ladder is formulated in a GLM context, as is the statistical distribution of the loss reserve. This structure is then used to test the need for departures from the chain ladder model and to formulate any required model extensions.
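The GLM formulation of the chain ladder referenced here is commonly realized as the well-known over-dispersed Poisson representation: a log-link GLM on incremental losses with accident-year and development-year factors reproduces chain-ladder reserve estimates. A minimal statsmodels sketch on an invented 3x3 triangle follows (data, column names, and the long-format layout are assumptions for illustration):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Incremental losses in long format: one row per (accident year, dev lag)
# cell of a complete 3x3 triangle. Values are illustrative only.
df = pd.DataFrame({
    "acc":  ["2015", "2015", "2015", "2016", "2016", "2017"],
    "dev":  ["0", "1", "2", "0", "1", "0"],
    "incr": [100.0, 50.0, 15.0, 110.0, 58.0, 115.0],
})

# Log-link Poisson GLM with accident- and development-year factors;
# scale="X2" estimates the over-dispersion via Pearson chi-square.
model = smf.glm("incr ~ C(acc) + C(dev)", data=df,
                family=sm.families.Poisson()).fit(scale="X2")

# Predict the unobserved future cells and sum to get the reserve.
future = pd.DataFrame({"acc": ["2016", "2017", "2017"],
                       "dev": ["2", "1", "2"]})
print("reserve:", round(model.predict(future).sum(), 1))
```

With a complete triangle, the fitted values for the future cells match the chain-ladder projections, which is what makes the GLM structure a natural base for testing departures from the chain ladder.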
This paper provides an accessible account of potential pitfalls in the use of predictive models in property and casualty insurance. With a series of entertaining vignettes, it illustrates what can go wrong. The paper should leave the reader with a better appreciation of when predictive modeling is the tool of choice and when it needs to be used with caution. Keywords: Predictive modeling, GLM