Browse Research
2009
This paper investigates the practical aspects of applying the second-order Bayesian revision of a generalized linear model (GLM) to form an adaptive filter for claims reserving. It discusses the application of such methods to three typical models used in Australian general insurance circles. Extensions, including the application of bootstrapping to an adaptive filter and the blending of results from the three models, are considered.
2009
The paper considers a model with multiplicative accident period and development period effects, and derives the maximum likelihood (ML) equations for parameter estimation in the case that the distribution of each cell of the claims triangle is a general member of the Tweedie family.
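As a hedged illustration of the multiplicative structure the abstract describes, the sketch below fits a log-link Tweedie GLM with accident-period and development-period effects to a small invented incremental triangle; the figures and the variance power of 1.5 are assumptions for illustration, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical incremental claims in long format (accident period x development period).
df = pd.DataFrame({
    "acc": ["2006", "2006", "2006", "2007", "2007", "2008"],
    "dev": ["d0", "d1", "d2", "d0", "d1", "d0"],
    "inc": [100.0, 60.0, 25.0, 110.0, 70.0, 120.0],
})

# Dummy-coded accident and development effects; the log link makes them multiplicative.
X = sm.add_constant(pd.get_dummies(df[["acc", "dev"]], drop_first=True).astype(float))
family = sm.families.Tweedie(var_power=1.5)  # assumed power between Poisson (1) and gamma (2)
fit = sm.GLM(df["inc"], X, family=family).fit()

print(np.exp(fit.params))  # estimated multiplicative accident/development factors
```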
2009
This paper describes a new approach to capital allocation; the catalyst for this new approach is a new formulation of the meaning of holding Value at Risk (VaR) capital. This formulation expresses the firm’s total capital as the sum of many granular pieces of capital, or “percentile layers of capital.” As a result, one allocates capital separately on each layer and then aggregates the allocations across all layers.
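A minimal numerical sketch of the layered idea, using simulated lognormal losses and a 99% VaR capital level (both assumptions chosen for illustration): each layer of capital between consecutive loss outcomes is shared equally among the scenarios that penetrate it.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=10.0, sigma=1.5, size=10_000)  # hypothetical loss scenarios
var99 = np.quantile(losses, 0.99)                          # capital held = 99% VaR (assumed)

order = np.argsort(losses)
alloc = np.zeros_like(losses)
prev = 0.0
n = losses.size
for i, y in enumerate(losses[order]):
    top = min(y, var99)            # layers above the VaR level are not capitalized
    if top <= prev:
        break
    # every scenario whose loss exceeds this layer shares its width equally
    alloc[order[i:]] += (top - prev) / (n - i)
    prev = top

print(f"capital: {var99:,.0f}  allocated: {alloc.sum():,.0f}")  # the two agree
```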
2009
In this paper we define a measure of error in the estimation of loss ratios: the discrepancy between the original estimate of the loss ratio and its ultimate value. We also investigate what publicly available data can tell us about this measure.
2009
A number of methods to measure the variability of property-liability loss reserves have been developed to meet the requirements of regulators, rating agencies, and management. These methods focus on nominal, undiscounted reserves, in line with statutory reserve requirements. Recently, though, there has been a trend to consider the fair value, or economic value, of loss reserves.
2009
Often in non-life insurance, claims reserves are the largest position on the liability side of the balance sheet. Therefore, the prediction of adequate claims reserves for a portfolio consisting of several run-off subportfolios from dependent lines of business is of great importance for every non-life insurance company.
2009
Fundamentally, estimates of claim liabilities are forecasts subject to estimation errors. The actuary responsible for making the forecast must select and apply one or more actuarial projection methods, interpret the results, and apply judgment. Performance testing of an actuarial projection method can provide empirical evidence as to the inherent level of estimation error associated with its forecasts.
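One common form of such performance testing is a retrospective comparison: hide the most recent diagonal of a triangle, re-project it with the method under test, and measure the forecast errors. A minimal sketch with an invented cumulative triangle and volume-weighted chain-ladder factors (both assumptions, not the paper's data or method):

```python
import numpy as np

# Invented cumulative triangle: rows are accident years, columns development years.
tri = np.array([
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, 190.0],
    [120.0, 180.0, 198.0, np.nan],
    [130.0, 196.0, np.nan, np.nan],
])

diagonal = [(1, 3), (2, 2), (3, 1)]                  # the cells to hide and re-forecast
actual = np.array([tri[i, j] for i, j in diagonal])

train = tri.copy()
for i, j in diagonal:
    train[i, j] = np.nan                             # pretend the latest diagonal is unknown

preds = []
for i, j in diagonal:
    rows = ~np.isnan(train[:, j]) & ~np.isnan(train[:, j - 1])
    f = train[rows, j].sum() / train[rows, j - 1].sum()   # volume-weighted link ratio
    preds.append(train[i, j - 1] * f)

print(np.array(preds) - actual)                      # empirical one-step forecast errors
```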
2009
The heavy-tailed nature of insurance claims requires that special attention be paid to the tail behavior of a loss distribution. It has been demonstrated that the distributions of large claims in several lines of insurance have Pareto-type tails. As a result, estimating the tail index, which is a measure of the heavy-tailedness of a distribution, has received a great deal of attention.
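The standard tool in this line of work is the Hill estimator, which averages log-spacings of the top order statistics; a small sketch on simulated Pareto data (the sample, the threshold count k, and the true tail index are all invented for illustration):

```python
import numpy as np

def hill_tail_index(sample, k):
    """Hill estimate of the Pareto tail index from the k largest observations."""
    x = np.sort(np.asarray(sample))
    gamma = np.mean(np.log(x[-k:]) - np.log(x[-k - 1]))  # mean log-excess over threshold
    return 1.0 / gamma                                   # tail index = 1 / gamma

rng = np.random.default_rng(1)
claims = 1.0 + rng.pareto(a=2.0, size=50_000)   # classical Pareto with tail index 2
print(f"Hill estimate: {hill_tail_index(claims, k=500):.2f}")  # close to 2
```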
2009
Often in non-life insurance, claim reserves are the largest position on the liability side of the balance sheet. Therefore, the estimation of adequate claim reserves for a portfolio consisting of several run-off subportfolios is relevant for every non-life insurance company.
2009
In a timeline formulation of simulation, events happen one at a time at definite times, and therefore in a definite time order. Simulation in a timeline formulation is presented in theory and practice. It is shown that all the usual simulation results can be obtained and that many new forms can be expressed simply.
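In code, a timeline formulation amounts to a loop driven by a time-ordered event queue. A small hedged sketch (the claim counts, rates, and delay distribution are all invented):

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)

# Build a time-ordered event queue: each claim occurs, then is paid after a random lag.
events = []
for t in rng.exponential(scale=30.0, size=10).cumsum():       # occurrence times (days)
    heapq.heappush(events, (t, "occurrence"))
    heapq.heappush(events, (t + rng.gamma(2.0, 50.0), "payment"))

open_claims = 0
while events:
    t, kind = heapq.heappop(events)    # events emerge one at a time, in time order
    open_claims += 1 if kind == "occurrence" else -1
    print(f"t = {t:8.2f}  {kind:<10}  open claims: {open_claims}")
```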
2008
Dynamic valuation models for the computation of optimum fair premiums are developed using a new framework. The concept of fair premiums which are also “best” is introduced. Optimum fair premiums are defined as the minimum discounted losses for an insurance firm or industry. This notion extends the discrete and continuous discounted cash flow models in many ways.
2008
Longitudinal data (or panel data) consist of repeated observations of individual units that are observed over time. Each individual insured is assumed to be independent, but correlation between contracts of the same individual is permitted.
2008
Significant work on the modeling of asset returns and other economic and financial processes is occurring within the actuarial profession, in support of risk-based capital analysis, dynamic financial analysis, pricing embedded options, solvency testing, and other financial applications. Although the results of most modeling efforts remain proprietary, two models are in the public domain.
2008
The present paper provides a unifying survey of some of the most important methods of loss reserving based on run-off triangles and proposes the use of a family of such methods instead of a single one.
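The best-known member of any such family is the chain-ladder method; a minimal sketch on an invented cumulative triangle, using all-year volume-weighted development factors:

```python
import numpy as np

tri = np.array([                       # invented cumulative run-off triangle
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, np.nan],
    [120.0, 180.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

n = tri.shape[1]
full = tri.copy()
for j in range(n - 1):
    known = ~np.isnan(tri[:, j + 1])
    f = tri[known, j + 1].sum() / tri[known, j].sum()   # volume-weighted link ratio
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * f         # project the unknown cells

latest = np.array([row[~np.isnan(row)][-1] for row in tri])
print("reserves by accident year:", full[:, -1] - latest)
```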
2008
All of us, especially those of us working in insurance, are constantly exposed to the results of small samples from skewed distributions. The majority of our customers will see small sample results below the population mean. Also, the most likely sample average value for any small sample from a skewed population will be below the mean of the skewed population being sampled. Experienced actuaries are aware of these issues.
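A quick simulation makes the point concrete; the lognormal population and the sample size of five are assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
pop_mean = np.exp(0.5)                               # mean of lognormal(mu=0, sigma=1)
means = rng.lognormal(0.0, 1.0, size=(100_000, 5)).mean(axis=1)  # many samples of size 5

print(f"population mean:                {pop_mean:.3f}")
print(f"median small-sample mean:       {np.median(means):.3f}")        # below pop_mean
print(f"share of sample means below it: {(means < pop_mean).mean():.1%}")  # above 50%
```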
2008
In this article, we present a Bayesian approach for calculating the credibility factor. Unlike existing methods, a Bayesian approach provides the decision maker with a useful credible interval based on the posterior distribution and the posterior summary statistics of the credibility factor, while most credibility models only provide a point estimate.
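A sketch of the idea in the simplest Bühlmann setting, where Z = n·τ² / (n·τ² + σ²): rather than plugging point estimates into that formula, draw the variance components from their posteriors and read off a credible interval for Z itself. The gamma posteriors below are placeholders for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_years = 8                                            # years of experience for the risk

# Placeholder posterior draws for the variance components (assumed, for illustration).
tau2 = rng.gamma(shape=3.0, scale=0.5, size=20_000)    # between-risk variance
sigma2 = rng.gamma(shape=5.0, scale=2.0, size=20_000)  # within-risk (process) variance

z = n_years * tau2 / (n_years * tau2 + sigma2)         # posterior draws of Z

print(f"posterior mean of Z: {z.mean():.3f}")
print("95% credible interval:", np.round(np.percentile(z, [2.5, 97.5]), 3))
```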
2008
Dynamic financial analysis (DFA) has become an important tool in analyzing the financial condition of insurance companies. Constant development and documentation of DFA tools has occurred during recent years. However, several questions concerning the implementation of DFA systems have not yet been answered in the DFA literature. One such important issue is the consideration of management strategies in the DFA context.
2008
This paper examines the impact of capital level on policy premium and shareholder return. If an insurance firm has a chance of default, it covers less liability than a default-free firm does, so it charges a lower premium. We explain why policyholders require greater premium credits than the uncovered liabilities. In a default-free firm, if frictional costs are ignored, we prove that shareholders are indifferent to the capital level.
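The premium effect the abstract describes can be written as a one-line option identity. With liability L and assets A (symbols assumed here, not taken from the paper), policyholders receive min(L, A), so the fair premium is the default-free value less the value of the firm's default put:

$$P \;=\; \mathrm{PV}\!\left[\min(L, A)\right] \;=\; \mathrm{PV}[L] \;-\; \mathrm{PV}\!\left[(L - A)^{+}\right].$$

The second term vanishes for a default-free firm, which is why a firm with a chance of default charges a lower premium.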
2008
When focusing on reserve ranges rather than point estimates, the approach to developing ranges across multiple lines becomes relevant. Instead of being able to simply sum across the lines, we must consider the effects of correlations between the lines. This paper presents two approaches to developing such aggregate reserve indications. Both approaches rely on a simulation model.
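A minimal sketch of the simulation idea with two lines, lognormal reserve distributions, and a Gaussian dependence structure; all figures (means, volatilities, and the correlation of 0.5) are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
rho = 0.5
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)

# Lognormal reserve outcomes for two lines with means 1,000 and 2,000 (assumed).
line_a = 1_000.0 * np.exp(0.20 * z[:, 0] - 0.5 * 0.20**2)
line_b = 2_000.0 * np.exp(0.35 * z[:, 1] - 0.5 * 0.35**2)
total = line_a + line_b

print("aggregate percentiles (50/75/95):", np.percentile(total, [50, 75, 95]).round(0))
naive = np.percentile(line_a, 95) + np.percentile(line_b, 95)
print(f"sum of per-line 95ths: {naive:,.0f}  (overstates the aggregate 95th)")
```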
2008
We model a claims process as a random time to occurrence followed by a random time to a single payment. Since available accident-year payout data are aggregated by development year rather than by payment lag, we calculate the corresponding probabilities and parameterize the payout lag-time distribution to maximize the fit to the data. General formulae are given for any distribution, but we use a piecewise linear continuous distribution.
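A sketch of the probability calculation the abstract refers to: if the occurrence time is uniform within the accident year and the payment lag has distribution function F, the chance that payment falls in development year d is the average of F(d + 1 − t) − F(d − t) over the occurrence time t. The exponential lag below is a stand-in for the paper's piecewise linear distribution:

```python
import numpy as np
from scipy.stats import expon

F = expon(scale=1.5).cdf                 # stand-in payout-lag distribution (years)
t = np.linspace(0.0, 1.0, 10_001)        # occurrence time within the accident year

def dev_year_prob(d):
    # average over the uniform occurrence time t of P(d - t < lag <= d + 1 - t)
    return (F(d + 1 - t) - F(d - t)).mean()

print([round(dev_year_prob(d), 4) for d in range(5)])  # probabilities sum to 1 as d grows
```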
2008
In this paper, we compare the point of view of the regulator and the investors about the required solvency level of an insurance company. We assume that the required solvency level is determined using the Tail Value at Risk and analyze the diversification benefit, both on the required capital and on the residual risk, when merging risks. To describe the dependence structure, we use a range of copulas.
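A sketch of the kind of computation involved: simulate two risks joined by a copula and compare the merged Tail Value at Risk with the sum of the standalone ones. The Clayton copula (sampled with the Marshall-Olkin frailty algorithm) and the Pareto marginals are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n, theta, p = 200_000, 2.0, 0.99

# Clayton copula sample via the Marshall-Olkin (gamma frailty) algorithm.
v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
u = (1.0 + rng.exponential(size=(n, 2)) / v[:, None]) ** (-1.0 / theta)

# Invented Pareto marginals for the two risks (scale 100, tail index 2.5).
x = 100.0 * (1.0 - u) ** (-1.0 / 2.5)

def tvar(losses, level):
    q = np.quantile(losses, level)
    return losses[losses >= q].mean()        # average loss beyond the quantile

standalone = tvar(x[:, 0], p) + tvar(x[:, 1], p)
merged = tvar(x.sum(axis=1), p)
print(f"diversification benefit at {p:.0%}: {standalone - merged:,.1f}")
```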
2008
This paper proposes a method for the continuous random modeling of loss index triggers for cat bonds.
2008
“Munich Chain Ladder” by Dr. Quarg and Dr. Mack is being reprinted in Variance to give this important paper wider visibility within the actuarial community. The editors of Variance invited the authors to submit their paper for republication because we believe that the techniques described in their work should be known to all actuaries doing reserve analysis. We also hope to stimulate further research in this area.
2008
One of the most commonly used data mining techniques is decision trees, also referred to as classification and regression trees or C&RT. Several new decision tree methods are based on ensembles or networks of trees and carry names like TreeNet and Random Forest. Viaene et al.
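For readers who want to try such ensembles, scikit-learn's RandomForestClassifier is a freely available implementation; the synthetic dataset below is only a stand-in for an insurance classification task:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an insurance classification task (features and labels invented).
X, y = make_classification(n_samples=2_000, n_features=10, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)  # ensemble of trees
forest.fit(X_tr, y_tr)
print(f"holdout accuracy: {forest.score(X_te, y_te):.3f}")
```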
2007
In this study, we propose a flexible and comprehensive iteration algorithm called “general iteration algorithm” (GIA) to model insurance ratemaking data.
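The classical ancestor of such iteration algorithms is the minimum-bias procedure, which alternately rebalances each rating factor until convergence; a two-way multiplicative sketch with invented loss and exposure figures (the GIA itself generalizes well beyond this):

```python
import numpy as np

losses = np.array([[80.0, 120.0], [150.0, 230.0]])    # observed losses by class x territory
exposures = np.array([[100.0, 90.0], [110.0, 95.0]])  # invented exposure weights

a = np.ones(2)                                        # class relativities
b = np.ones(2)                                        # territory relativities
for _ in range(200):                                  # iterate the balance equations
    a = losses.sum(axis=1) / (exposures * b[None, :]).sum(axis=1)
    b = losses.sum(axis=0) / (exposures * a[:, None]).sum(axis=0)

print("class factors:", a.round(4), " territory factors:", b.round(4))
print("fitted pure premiums:", (exposures * np.outer(a, b)).round(1))
```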