Browse Research

We consider Tweedie’s compound Poisson model in a claims reserving triangle in a generalized linear model framework. We show that there exist practical situations where the variance of the costs, and not only their mean, needs to be modeled. We optimize the likelihood function either directly or through double generalized linear models (DGLM).
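As a minimal sketch of the modeling setup (not the paper’s implementation), a Tweedie GLM with log link can be fit in Python with statsmodels; the data, column names, and variance power of 1.5 are illustrative assumptions:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical triangle data: one row per (accident year, development year) cell.
df = pd.DataFrame({
    "acc_year": [2018, 2018, 2019, 2019, 2020],
    "dev_year": [1, 2, 1, 2, 1],
    "cost":     [1200.0, 800.0, 1350.0, 900.0, 1400.0],
})

# Dummy-encode accident and development periods as GLM covariates.
X = pd.get_dummies(df[["acc_year", "dev_year"]].astype(str), drop_first=True)
X = sm.add_constant(X.astype(float))

# A Tweedie family with 1 < p < 2 corresponds to a compound Poisson-gamma model;
# var_power = 1.5 is an illustrative choice, not an estimated value.
model = sm.GLM(df["cost"], X, family=sm.families.Tweedie(var_power=1.5))
result = model.fit()
print(result.summary())
```

The DGLM route would add a second, linked GLM for the dispersion; statsmodels does not provide that directly, so this sketch covers only the mean model.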
Underinsured Motorist (UIM) coverage, also known as Family Protection coverage, is a component of most Canadian personal automobile policies, and similar coverage exists in many American states. Traditional ratemaking methods are not appropriate for UIM because of the poor credibility of the available data and the unique characteristics of the coverage.
This paper proposes a methodology for calculating the credibility risk premium based on the uncertainty of the risk premium (also known as the pure loss cost or pure premium), as estimated by the standard deviation of the risk premium estimator. An optimal estimator based on the uncertainties involved in the pricing process is constructed.
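For orientation only, the classical Bühlmann credibility form (not the paper’s optimal estimator) already ties the credibility weight to the variance of the risk premium estimator: with \(a\) the variance of the hypothetical means and \(s^2/n\) the variance of the individual risk’s sample mean,

\[ \hat{P} = Z\,\bar{X} + (1 - Z)\,\mu, \qquad Z = \frac{a}{a + s^2/n}. \]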
Property/casualty reserves are estimates of losses and loss development and as such will not match the ultimate results. Sources of error include model error (the methodology used does not accurately reflect the development process), parameter error (incorrect model parameters), and process error (future development is random). This paper provides a comprehensive and practical methodology for quantifying risk that includes all three sources.
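In notation of my own choosing (a standard decomposition, not a formula quoted from the paper), process and parameter error combine through conditioning on the parameter vector \(\theta\), while model error lies outside the formula:

\[ \operatorname{Var}(X) = \underbrace{\mathbb{E}\left[\operatorname{Var}(X \mid \theta)\right]}_{\text{process}} + \underbrace{\operatorname{Var}\left(\mathbb{E}[X \mid \theta]\right)}_{\text{parameter}}. \]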
This paper offers a methodology for calculating optimal bounds on tail risk probabilities by deriving upper and lower semiparametric bounds, given only the first two moments of the distribution. We apply this methodology to determine bounds for the probabilities of two tail events. The first tail event occurs when two financial variables simultaneously take extremely low values.
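As a single-variable illustration of the type of bound involved (the paper derives sharper bounds for joint tail events), the one-sided Chebyshev (Cantelli) inequality bounds a low-value tail probability using only the mean \(\mu\) and variance \(\sigma^2\):

\[ P(X \le \mu - t) \le \frac{\sigma^2}{\sigma^2 + t^2}, \qquad t > 0. \]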
In this paper, linear mixed models are employed for the estimation of structural parameters in a credibility context. In particular, Hachemeister’s model and Dannenburg’s crossed classification model are considered. Maximum likelihood (ML) and restricted maximum likelihood (REML) methods are developed to estimate the variance and covariance parameters.
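A minimal sketch of REML estimation for a random-intercept mixed model in Python with statsmodels; the data frame and column names are hypothetical, and the paper’s Hachemeister and crossed-classification structures would require richer random-effect designs:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal claims data: several observations per contract.
df = pd.DataFrame({
    "contract": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    "period":   [1, 2, 3] * 4,
    "loss":     [100.0, 120.0, 110.0, 90.0, 95.0, 85.0,
                 130.0, 140.0, 150.0, 105.0, 100.0, 115.0],
})

# Random intercept per contract; variance components fit by REML.
model = smf.mixedlm("loss ~ period", df, groups=df["contract"])
result = model.fit(reml=True)
print(result.summary())
```

Hachemeister’s regression credibility and Dannenburg’s crossed classification would replace the single random intercept with richer random-effect structures via MixedLM’s re_formula and vc_formula arguments.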
In 2009, in the aftermath of the Global Financial Crisis, 140 American banks failed, and hundreds of other banks were classified as “problem institutions” by the FDIC. This has led to numerous books and articles examining the causes of systemic risk in our financial system. In this paper we step back in history to see what we should have learned from a previous banking crisis, which occurred during the 1980s.
This paper presents a bootstrap approach to estimating the prediction distributions of reserves produced by the Munich chain ladder (MCL) model. The MCL model was introduced by Quarg and Mack (2004) and takes into account both paid and incurred claims information. To produce bootstrap distributions, the paper addresses the application of bootstrapping methods to dependent data, so that the correlations between paid and incurred claims are preserved.
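The core dependent-data idea can be sketched briefly (this is only the joint-resampling step, not the full MCL bootstrap; all names here are hypothetical): paid and incurred residuals from the same cell are resampled together so that their correlation survives in every pseudo-sample.

```python
import numpy as np

def paired_bootstrap(paid_resid, incurred_resid, n_boot=1000, seed=0):
    """Resample paid and incurred residuals jointly, preserving their correlation."""
    rng = np.random.default_rng(seed)
    n = len(paid_resid)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # one index draw reused for both series
        yield paid_resid[idx], incurred_resid[idx]
```

Each pseudo-sample would then be pushed back through the MCL recursions to produce one simulated reserve.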
Robust statistical procedures have a growing literature and have been applied to loss severity fitting in actuarial applications. This paper presents an introduction to robust methods for loss reserving. In particular, following Tampubolon (2008), reserving models for a development triangle are compared based on the sensitivity of the reserve estimates to changes in individual data points.
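A sketch of the sensitivity idea applied to the plain chain-ladder method (my construction; the paper compares more elaborate reserving models): perturb each observed cell of a cumulative triangle and record the relative change in the total reserve.

```python
import numpy as np

def chain_ladder_reserve(tri):
    """Total reserve from a cumulative run-off triangle (NaN below the diagonal)."""
    n_dev = tri.shape[1]
    factors = []
    for j in range(n_dev - 1):
        both = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
        factors.append(tri[both, j + 1].sum() / tri[both, j].sum())
    reserve = 0.0
    for i in range(tri.shape[0]):
        j = np.where(~np.isnan(tri[i]))[0].max()       # latest observed column
        reserve += tri[i, j] * (np.prod(factors[j:]) - 1.0)
    return reserve

def reserve_sensitivity(tri, bump=0.01):
    """Relative reserve change from bumping each observed cell by 1%."""
    base = chain_ladder_reserve(tri)
    out = np.full(tri.shape, np.nan)
    for i, j in zip(*np.where(~np.isnan(tri))):
        bumped = tri.copy()
        bumped[i, j] *= 1.0 + bump
        out[i, j] = (chain_ladder_reserve(bumped) - base) / base
    return out
```

Cells whose perturbation moves the reserve disproportionately are the influential points a robust procedure would downweight.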
In this paper we construct a stochastic model and derive approximation formulae to estimate the standard error of prediction under the loss ratio approach to assessing premium liabilities. We focus on the future claims component of premium liabilities and examine the weighted and simple average loss ratio estimators.
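In notation of my own (the abstract does not fix one), with losses \(L_i\) and earned premiums \(P_i\) over \(n\) past periods, the two estimators are

\[ \hat{R}_{\text{simple}} = \frac{1}{n} \sum_{i=1}^{n} \frac{L_i}{P_i}, \qquad \hat{R}_{\text{weighted}} = \frac{\sum_{i=1}^{n} L_i}{\sum_{i=1}^{n} P_i}. \]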
The behavior of competing insurance companies investigating insurance fraud follows one of several Nash equilibria, under which companies consider the claim savings, net of investigation cost, on a portion or all of the total claim. This behavior can reduce the effectiveness of investigations when two or more competing insurers are involved.
Insurers purchase catastrophe reinsurance primarily to reduce underwriting risk in any one experience period and thus enhance the stability of their income stream over time. Reinsurance comes at a cost and therefore it is important to maintain a balance between the perceived benefit of buying catastrophe reinsurance and its cost.
This paper presents a Bayesian stochastic loss reserve model with several distinctive features.
Excess loss factors, which are ratios of expected losses in excess of a limit to total expected losses, are used by the National Council on Compensation Insurance (NCCI) in class ratemaking (estimating the expected ratio of losses to payroll for individual workers compensation classifications) and by insurance carriers to determine premiums for certain retrospectively rated policies (policies for which the claims used in the premium determination are limited).
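Using the abstract’s own definition, with \(X\) the loss on a claim and \(L\) the limit, the excess loss factor is

\[ \mathrm{ELF}(L) = \frac{\mathbb{E}\big[(X - L)_{+}\big]}{\mathbb{E}[X]}. \]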
This paper investigates the practical aspects of applying the second-order Bayesian revision of a generalized linear model (GLM) to form an adaptive filter for claims reserving. It discusses the application of such methods to three typical models used in Australian general insurance circles. Extensions, including the application of bootstrapping to an adaptive filter and the blending of results from the three models, are considered.
The paper considers a model with multiplicative accident period and development period effects, and derives the maximum likelihood (ML) equations for parameter estimation when the distribution of each cell of the claims triangle is a general member of the Tweedie family.
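In generic notation (mine, not necessarily the paper’s), with \(C_{ij}\) the amount in cell \((i, j)\) of the triangle, accident-period effect \(\alpha_i\), and development-period effect \(\beta_j\):

\[ \mathbb{E}[C_{ij}] = \mu_{ij} = \alpha_i \beta_j, \qquad \operatorname{Var}(C_{ij}) = \phi\, \mu_{ij}^{\,p}, \]

where the power \(p\) indexes the Tweedie family (the range \(1 < p < 2\) gives the compound Poisson-gamma case).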
This paper describes a new approach to capital allocation, catalyzed by a reformulation of the meaning of holding Value at Risk (VaR) capital. This formulation expresses the firm’s total capital as the sum of many granular pieces of capital, or “percentile layers of capital.” As a result, one must allocate capital separately on each layer and perform the capital allocation across all layers.
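One way to sketch the construction (my notation; the paper develops it in full): each thin layer of capital \((y, y + dy)\) below the VaR is allocated to the loss scenarios that penetrate it, in proportion to their probabilities, so a scenario with loss \(x_i\) and probability \(p_i\) receives

\[ \int_{0}^{\min(x_i,\, \mathrm{VaR})} \frac{p_i}{P(X > y)} \, dy. \]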
In this paper we define a measure of error in the estimation of loss ratios: the discrepancy between the original estimate of the loss ratio and its ultimate value. We also investigate what publicly available data can tell us about this measure.
A number of methods to measure the variability of property-liability loss reserves have been developed to meet the requirements of regulators, rating agencies, and management. These methods focus on nominal, undiscounted reserves, in line with statutory reserve requirements. Recently, though, there has been a trend to consider the fair value, or economic value, of loss reserves.
Often in non-life insurance, claims reserves are the largest position on the liability side of the balance sheet. Therefore, the prediction of adequate claims reserves for a portfolio consisting of several run-off subportfolios from dependent lines of business is of great importance for every non-life insurance company.
Fundamentally, estimates of claim liabilities are forecasts subject to estimation errors. The actuary responsible for making the forecast must select and apply one or more actuarial projection methods, interpret the results, and apply judgment. Performance testing of an actuarial projection method can provide empirical evidence as to the inherent level of estimation error associated with its forecasts.
The heavy-tailed nature of insurance claims requires that special attention be paid to the analysis of the tail behavior of a loss distribution. It has been demonstrated that the distributions of large claims in several lines of insurance have Pareto-type tails. As a result, estimating the tail index, which is a measure of the heavy-tailedness of a distribution, has received a great deal of attention.
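The Hill estimator is the standard starting point for tail-index estimation from the largest order statistics; a minimal sketch follows (choosing the number of exceedances k is the hard problem this literature wrestles with):

```python
import numpy as np

def hill_tail_index(losses, k):
    """Hill estimator of the Pareto tail index alpha from the k largest losses."""
    x = np.sort(np.asarray(losses, dtype=float))
    excess_logs = np.log(x[-k:]) - np.log(x[-k - 1])  # log-exceedances over threshold
    return 1.0 / excess_logs.mean()                   # alpha_hat = 1 / mean log-exceedance

# Illustrative use on simulated Pareto losses with true index alpha = 2.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=10_000) + 1.0  # classical Pareto on [1, inf)
print(hill_tail_index(sample, k=500))        # should come out near 2
```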
Often in non-life insurance, claim reserves are the largest position on the liability side of the balance sheet. Therefore, the estimation of adequate claim reserves for a portfolio consisting of several run-off subportfolios is relevant for every non-life insurance company.
In a timeline formulation of simulation, events happen one at a time at definite times, and therefore in a definite time order. Simulation in a timeline formulation is presented in theory and practice. It is shown that all the usual simulation results can be obtained and that many new forms can be expressed simply.
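A minimal sketch of the timeline idea (my own construction, not the paper’s framework): keep all pending events in a priority queue keyed by time, and process them strictly one at a time in time order.

```python
import heapq
import random

def simulate_total_claims(horizon, claim_rate, mean_severity, seed=0):
    """Timeline simulation of a compound Poisson claims process."""
    rng = random.Random(seed)
    # Min-heap of (event_time, event_id); the id breaks ties deterministically.
    events = [(rng.expovariate(claim_rate), 0)]
    total, next_id = 0.0, 1
    while events:
        t, _ = heapq.heappop(events)   # always the earliest pending event
        if t > horizon:
            break
        total += rng.expovariate(1.0 / mean_severity)  # pay the claim
        # Schedule the next arrival further along the timeline.
        heapq.heappush(events, (t + rng.expovariate(claim_rate), next_id))
        next_id += 1
    return total

print(simulate_total_claims(horizon=1.0, claim_rate=100.0, mean_severity=50.0))
```

Because the heap always yields the earliest pending event, arbitrarily many interleaved processes (claims, payments, recoveries) can share one timeline.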
Dynamic valuation models for the computation of optimum fair premiums are developed using a new framework. The concept of fair premiums that are also “best” is introduced. Optimum fair premiums are defined as those minimizing the discounted losses for an insurance firm or industry. This notion extends the discrete and continuous discounted cash flow models in many ways.