Browse Research

2006
Motivation: For property casualty insurers, loss reserves are by far their largest liability. These are actuarial estimates of future loss payments resulting from accidents that have already occurred. In fact, the actual future loss payments may deviate, sometimes substantially, from the amount that was estimated.
2006
Motivation: The paper will address the issue of estimating the uncertainty in the run-off of individual large claims in insurance portfolios, which is often the primary source of uncertainty in the reserving risk component of insurance risk. Method: The paper begins by reviewing current methodologies for estimating the uncertainty in loss reserves.
2006
Motivation: This paper looks at the problem of measuring correlation between reserve segments. The research was motivated by the 2005 CAS Working Party on Reserve Variability. Method: Using a random-walk time series model for inflation, we can estimate the variance of a stream of inflation-sensitive payments. The same calculations can be performed to estimate the covariance between two streams of payments.
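As a rough illustration of the idea, not taken from the paper, the following sketch simulates a random-walk model for log-inflation and estimates the variance of, and covariance between, two inflation-sensitive payment streams by Monte Carlo; all payment amounts and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: annual log-inflation is a random walk with
# drift mu and volatility sigma; two payment streams are fixed in real terms.
mu, sigma, n_years, n_sims = 0.02, 0.01, 5, 100_000
stream_a = np.array([100.0, 80.0, 60.0, 40.0, 20.0])   # hypothetical payments
stream_b = np.array([50.0, 50.0, 50.0, 50.0, 50.0])

# Cumulative inflation index for each simulated path (sims x years).
steps = rng.normal(mu, sigma, size=(n_sims, n_years))
index = np.exp(np.cumsum(steps, axis=1))

# Nominal total of each inflation-sensitive stream per path.
total_a = (index * stream_a).sum(axis=1)
total_b = (index * stream_b).sum(axis=1)

print("Var(A):", total_a.var())
print("Var(B):", total_b.var())
print("Cov(A, B):", np.cov(total_a, total_b)[0, 1])  # shared inflation induces dependence
```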
2006
Motivation: The CAS Research Working Party on Correlation and Dependencies Among All Risk Sources has been charged to "lay the theoretical and experimental foundation for quantifying variability when data is limited, estimating the nature and magnitude of dependence relationships, and generating aggregate distributions that integrate these disparate risk sources."
2006
Motivation: One of the newest areas of data mining is text mining. Text mining is used to extract information from free form text data such as that in claim description fields. This paper introduces the methods used to do text mining and applies the method to a simple example.
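For context, one common text-mining pipeline on claim description fields converts the free-form text to TF-IDF term weights and then clusters similar claims. The sketch below uses scikit-learn with made-up descriptions and parameters; the paper's own method may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical free-form claim descriptions.
claims = [
    "rear end collision on highway, neck injury",
    "water damage to basement after pipe burst",
    "slip and fall on wet floor, back strain",
    "hail damage to roof and siding",
]

# Convert text to TF-IDF term weights, then group similar claims.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(claims)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignment for each claim description
```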
2006
The marginal approach to risk and return analysis compares the marginal return from a business decision to the marginal risk imposed. Allocation distributes the total company risk to business units and compares the profit/risk ratio of the units. These approaches coincide when the allocation actually assigns the marginal risk to each business unit, i.e., when the marginal impacts add up to the total risk measure.
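In symbols, this additivity is the standard Euler property, stated here for context rather than quoted from the paper: if the risk measure $\rho$ is positively homogeneous of degree one in the business-unit exposures $x_1,\dots,x_n$, then

$$\rho(x_1,\dots,x_n) \;=\; \sum_{i=1}^{n} x_i \,\frac{\partial \rho}{\partial x_i}(x_1,\dots,x_n),$$

so allocating $x_i\,\partial\rho/\partial x_i$ to unit $i$ makes the marginal impacts add up to the total risk measure.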
2006
This paper reports on our research into the issues associated with establishing standards for materiality associated with claim liability estimates. In our research we explored several alternative methods for developing benchmarks for materiality. Rather than restrict ourselves to theoretical considerations, we tested the various methods empirically using public data for individual companies and various lines of business.
2006
The present paper provides a unifying survey of some of the most important methods and models of loss reserving which are based on run-off triangles. The starting point is the thesis that the use of run-off triangles in loss reserving can be justified only under the assumption that the development of the losses of every accident year follows a development pattern which is common to all accident years.
2006
The passage and implementation of the Sarbanes-Oxley Act of 2002 was the most significant legislation in securities regulation and corporate governance in the US since the Securities Exchange Act of 1934.
2006
While the actuarial literature devoted to stochastic loss reserving has been developing at an impressive rate, much of this literature has been devoted to the statistical analysis of summarized loss triangles. This restriction limits the benefits that modern statistical techniques can bring to the subject of loss reserving. This paper will sketch one possible framework for estimating future claims payments using claim-level data.
2006
Increased limits ratemaking focuses on the development of appropriate charges for various limits of liability coverages. Common liability lines of insurance include Personal Automobile Liability, Commercial Automobile Liability, General Liability, and Medical Professional Liability.
2006
Deductible clauses are common in property and casualty insurance policies. This study note illustrates some of the reasons for the rise in popularity of deductible policies, examines some considerations in pricing such policies, and outlines a basic illustration of deductible pricing.
2006
Final remark on the comments in the article by Thomas Mack, Gerhard Quarg and Christian Braun (ASTIN Bulletin 36/2, pp. 543-552).
2006
We revisit the famous Mack formula [2], which gives an estimate for the mean square error of prediction (MSEP) of the chain ladder claims reserving method. We define a time series model for the chain ladder method. In this time series framework we give an approach for the estimation of the conditional MSEP. It turns out that our approach leads to results that differ from the Mack formula.
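For readers unfamiliar with the underlying method, the sketch below computes chain ladder development factors and ultimate estimates on an illustrative cumulative triangle; it is a minimal assumption-laden example and does not reproduce the MSEP calculations discussed in the paper.

```python
import numpy as np

# Illustrative cumulative claims triangle (rows: accident years, NaN: unobserved).
C = np.array([
    [1000.0, 1500.0, 1700.0, 1750.0],
    [1100.0, 1650.0, 1870.0, np.nan],
    [1200.0, 1800.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])
n = C.shape[0]

# Chain ladder development factors: f_j = sum_i C[i, j+1] / sum_i C[i, j],
# taken over accident years with both columns observed.
f = []
for j in range(n - 1):
    rows = ~np.isnan(C[:, j + 1])
    f.append(C[rows, j + 1].sum() / C[rows, j].sum())

# Project each accident year to ultimate with the estimated factors.
ultimates = C.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(ultimates[i, j + 1]):
            ultimates[i, j + 1] = ultimates[i, j] * f[j]

print("Development factors:", np.round(f, 4))
print("Ultimate estimates:", np.round(ultimates[:, -1], 1))
```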
2006
This paper was prepared in response to a call from the American Academy of Actuaries Committee on Property and Liability Financial Reporting (COPLFR). The call requested ideas about how to define and test for risk transfer in short duration reinsurance contracts as required by FAS 113 and SSAP 62.
2006
This chapter discusses an approach to the correlation problem where losses in different lines of insurance are linked by a common variation (or shock) in the parameters of each line's loss model. The chapter begins with a simple common shock model and graphically illustrates the effect of the magnitude of the shocks on correlation.
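A small simulation sketch of a common shock of this general kind follows: two lines whose Poisson claim frequencies are scaled by a shared gamma shock with mean one. All parameters are illustrative and not drawn from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 200_000

# Shared shock multiplying the expected frequency of both lines.
# Gamma with mean 1; its variance controls the induced correlation.
shock = rng.gamma(shape=10.0, scale=0.1, size=n_sims)

# Conditional on the shock, claim counts in the two lines are independent Poisson.
counts_1 = rng.poisson(50.0 * shock)
counts_2 = rng.poisson(30.0 * shock)

print("Correlation between lines:", np.corrcoef(counts_1, counts_2)[0, 1])
# A larger shock variance (e.g. shape=2, scale=0.5) produces stronger correlation.
```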
2006
In recent years a number of "data mining" approaches for modeling data containing nonlinear and other complex dependencies have appeared in the literature. One of the key data mining techniques is decision trees, also referred to as classification and regression trees or CART (Breiman et al., 1993). That method results in relatively easy-to-apply decision rules that partition data and model many of the complexities in insurance data.
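As a rough illustration of the CART idea, the scikit-learn sketch below fits a shallow regression tree to made-up insurance-like data and prints its partitioning rules; the features, target, and parameters are illustrative assumptions only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
n = 1000

# Hypothetical predictors: driver age and vehicle value.
age = rng.uniform(18, 80, n)
vehicle_value = rng.uniform(5_000, 60_000, n)
X = np.column_stack([age, vehicle_value])

# Nonlinear loss cost: young drivers and expensive vehicles cost more.
y = 500 + 2000 * (age < 25) + 0.02 * vehicle_value + rng.normal(0, 100, n)

# Fit a shallow tree and print the resulting decision rules.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=["age", "vehicle_value"]))
```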
2006
In actuarial applications we often work with loss distributions for insurance products. For example, in P&C insurance, we may develop a compound Poisson model for the losses under a single policy or a whole portfolio of policies. Similarly, in life insurance, we may develop a loss distribution for a portfolio of policies, often by stochastic simulation.
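A minimal Monte Carlo sketch of such a compound Poisson loss model follows, with Poisson claim counts and lognormal severities; the parameters are illustrative assumptions rather than values from any particular portfolio.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 100_000

lam = 5.0             # expected number of claims per period (assumed)
mu, sigma = 8.0, 1.0  # lognormal severity parameters (assumed)

# Aggregate loss S = X_1 + ... + X_N with N ~ Poisson(lam), X_i ~ LogNormal(mu, sigma).
counts = rng.poisson(lam, size=n_sims)
agg = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

print("Mean aggregate loss:", agg.mean())
print("99.5th percentile:", np.quantile(agg, 0.995))
```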
2006
This paper demonstrates a Bayesian method for estimating the distribution of future loss payments of individual insurers.
2006
Characteristic of many reserving methods designed to analyse claims data aggregated by contract or sets of contracts is the assumption that features typifying historical data are representative of the underwritten risk and of the future losses likely to affect the contracts.
2006
This practice note was prepared by the Committee on Property and Liability Financial Reporting (COPLFR) of the American Academy of Actuaries (Academy). It is not an Actuarial Standard of Practice.
2006
We discuss some questionable points of the approach taken in the paper by Buchwalder, Bühlmann, Merz and Wüthrich and come to the conclusion that this approach does not yield an improvement of Mack’s original formula. The main reason is that the new approach disregards the negative correlation of the squares of the development factors. The same applies to the formula by Murphy (PCAS 1994).
2006
This paper addresses some of the problems a majority of retired individuals face: Why and in what proportion should they invest in a life annuity to maximize the utility of their future consumption or a bequest? The market considered in this work is made up of three assets: a life annuity, a risky asset and a cash account.
2006
This article discusses risk classification and develops and discusses a framework for estimating the effects of restrictions on risk classification. It is shown that expected losses due to adverse selection depend only on means, variances and covariances of insurance factors and rates of uptake of insurance. Percentage loadings required to avoid losses are displayed.
2006
We consider a class of Bayesian infinite mixture models, first introduced by Lo (1984), to determine the credibility premium for a non-homogeneous insurance portfolio. The Bayesian infinite mixture models provide us with much flexibility in the specification of the claim distribution. We employ the sampling scheme based on a weighted Chinese restaurant process introduced in Lo et al.
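For intuition only, the sketch below draws a partition from a plain (unweighted) Chinese restaurant process prior, the clustering mechanism underlying such infinite mixture models; the weighted scheme of Lo et al. and the credibility calculation itself are beyond this snippet, and the parameters are illustrative.

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Draw one partition of n items from a Chinese restaurant process prior."""
    tables = []       # number of items seated at each table (mixture component)
    assignment = []
    for _ in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)   # open a new component
        else:
            tables[k] += 1     # join an existing component
        assignment.append(k)
    return assignment

rng = np.random.default_rng(4)
print(crp_partition(10, alpha=1.0, rng=rng))  # e.g. component label per policyholder
```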