Browse Research

Longitudinal data (or panel data) consist of repeated observations of individual units over time. Each insured individual is assumed to be independent of the others, but correlation between contracts of the same individual is permitted.
Significant work on the modeling of asset returns and other economic and financial processes is occurring within the actuarial profession, in support of risk-based capital analysis, dynamic financial analysis, pricing embedded options, solvency testing, and other financial applications. Although the results of most modeling efforts remain proprietary, two models are in the public domain.
The present paper provides a unifying survey of some of the most important methods of loss reserving based on run-off triangles and proposes the use of a family of such methods instead of a single one.
All of us, especially those of us working in insurance, are constantly exposed to the results of small samples from skewed distributions. The majority of our customers will see small-sample results below the population mean. Moreover, the most likely sample average for any small sample from a skewed population falls below the mean of the population being sampled. Experienced actuaries are aware of these issues.
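The small-sample effect described above can be checked with a short simulation. The sketch below is not from the paper; the lognormal population and the sample size of five are arbitrary assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Right-skewed population: lognormal with mean exp(mu + sigma^2 / 2)
mu, sigma = 0.0, 1.0
pop_mean = np.exp(mu + sigma**2 / 2)

# Draw many small samples and record each sample average
n_samples, sample_size = 100_000, 5
means = rng.lognormal(mu, sigma, size=(n_samples, sample_size)).mean(axis=1)

# The sample mean is unbiased on average, yet most individual
# small-sample averages land below the population mean.
print(f"population mean                     : {pop_mean:.3f}")
print(f"average of the sample means         : {means.mean():.3f}")
print(f"share of sample means below the mean: {(means < pop_mean).mean():.1%}")
```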
In this article, we present a Bayesian approach for calculating the credibility factor. Unlike existing methods, a Bayesian approach provides the decision maker with a useful credible interval based on the posterior distribution and the posterior summary statistics of the credibility factor, while most credibility models only provide a point estimate.
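One way a posterior distribution, and hence a credible interval, for a credibility factor can arise is sketched below under a simple normal-normal model with a flat grid prior on the between-risk variance. The model, data, exposure, and variance figures are illustrative assumptions, not the paper's approach.

```python
import numpy as np
from scipy import stats

# Assumed normal-normal credibility model:
#   risk means theta_i ~ N(m, tau^2), observed averages x_i | theta_i ~ N(theta_i, s2/n)
# The credibility factor is Z = tau^2 / (tau^2 + s2/n); a prior on tau^2
# induces a posterior distribution on Z.
x = np.array([0.82, 1.10, 0.95, 1.30, 0.88, 1.05])  # hypothetical observed risk averages
m, s2, n = 1.0, 0.25, 10.0                          # assumed overall mean, process variance, exposure

tau2_grid = np.linspace(1e-4, 1.0, 2000)            # flat prior on tau^2 over a grid
# Marginal likelihood: x_i ~ N(m, tau^2 + s2/n)
loglik = np.array([stats.norm.logpdf(x, m, np.sqrt(t + s2 / n)).sum() for t in tau2_grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()

Z = tau2_grid / (tau2_grid + s2 / n)                # credibility factor on the same grid
post_mean_Z = np.sum(Z * post)
cdf = np.cumsum(post)                               # Z is monotone in tau^2, so its
lo, hi = Z[np.searchsorted(cdf, 0.05)], Z[np.searchsorted(cdf, 0.95)]  # quantiles transfer directly
print(f"posterior mean of Z: {post_mean_Z:.2f}, 90% credible interval: ({lo:.2f}, {hi:.2f})")
```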
Dynamic financial analysis (DFA) has become an important tool in analyzing the financial condition of insurance companies. DFA tools have been steadily developed and documented in recent years. However, several questions concerning the implementation of DFA systems have not yet been answered in the DFA literature. One such important issue is the consideration of management strategies in the DFA context.
This paper examines the impact of capital level on policy premium and shareholder return. If an insurance firm has a chance of default, it covers less liability than a default-free firm does, so it charges a lower premium. We explain why policyholders require premium credits greater than the uncovered liabilities. In a default-free firm, if frictional costs are ignored, we prove that shareholders are indifferent to the capital level.
When the focus shifts from point estimates to reserve ranges, the question of how to develop ranges across multiple lines becomes relevant. Rather than simply summing across the lines, we must consider the effects of correlations between them. This paper presents two approaches to developing such aggregate reserve indications. Both approaches rely on a simulation model.
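A minimal sketch of the general idea, not of the paper's two approaches: reserves by line are simulated from assumed lognormal distributions, tied together through a Gaussian copula, and summed to obtain percentiles of the aggregate. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reserve distributions by line (lognormal means/CVs are assumptions)
means = np.array([100.0, 60.0, 40.0])   # expected unpaid losses by line
cvs   = np.array([0.15, 0.25, 0.35])    # coefficients of variation by line
sigma = np.sqrt(np.log(1 + cvs**2))
mu    = np.log(means) - sigma**2 / 2

# Gaussian copula: correlate the lines through a multivariate normal
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
z = rng.multivariate_normal(np.zeros(3), corr, size=100_000)
lines = np.exp(mu + sigma * z)          # correlated lognormal reserves by line
total = lines.sum(axis=1)

for p in (0.50, 0.75, 0.95):
    print(f"{p:.0%} percentile of aggregate reserve: {np.quantile(total, p):,.1f}")
```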
We model a claims process as a random time to occurrence followed by a random time to a single payment. Since the available accident-year payout data are aggregated by development year rather than by payment lag, we calculate the corresponding development-year probabilities and parameterize the payout lag distribution to maximize the fit to the data. General formulae are given for any distribution, but we use a piecewise linear continuous distribution.
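The mapping from a payment-lag distribution to development-year probabilities can be sketched by Monte Carlo, as below. The exponential lag is only a placeholder for the paper's piecewise linear distribution, and all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Accident dates uniform within the accident year; payment lag drawn from an
# assumed exponential distribution (placeholder for a piecewise linear one).
n = 1_000_000
accident_time = rng.uniform(0.0, 1.0, n)        # years from start of accident year
lag = rng.exponential(scale=1.5, size=n)        # years from occurrence to payment
payment_time = accident_time + lag

# Development year d collects payments made in calendar interval [d-1, d)
dev_year = np.floor(payment_time).astype(int) + 1
probs = np.bincount(dev_year, minlength=7)[1:7] / n
for d, p in enumerate(probs, start=1):
    print(f"P(payment in development year {d}) = {p:.3f}")
```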
In this paper, we compare the points of view of the regulator and the investors about the required solvency level of an insurance company. We assume that the required solvency level is determined using the Tail Value at Risk and analyze the diversification benefit, both on the required capital and on the residual risk, when merging risks. To describe the dependence structure, we use a range of copulas.
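A minimal sketch of a TVaR diversification-benefit calculation for two risks joined by a Gaussian copula; the marginals, correlation, and confidence level are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def tvar(losses, level=0.99):
    """Tail Value at Risk: average loss at or beyond the level-quantile."""
    q = np.quantile(losses, level)
    return losses[losses >= q].mean()

# Two lognormal risks joined by a Gaussian copula
n, rho = 500_000, 0.4
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                              # copula sample on [0,1]^2
x = stats.lognorm.ppf(u[:, 0], s=0.8, scale=100)   # risk 1
y = stats.lognorm.ppf(u[:, 1], s=1.2, scale=60)    # risk 2

standalone = tvar(x) + tvar(y)
merged = tvar(x + y)
print(f"sum of standalone TVaRs : {standalone:,.0f}")
print(f"TVaR of merged portfolio: {merged:,.0f}")
print(f"diversification benefit : {standalone - merged:,.0f}")
```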
This paper proposes a method for the continuous random modeling of loss index triggers for cat bonds.
“Munich Chain Ladder” by Dr. Quarg and Dr. Mack is being reprinted in Variance to give this important paper wider visibility within the actuarial community. The editors of Variance invited the authors to submit their paper for republication because we believe that the techniques described in their work should be known to all actuaries doing reserve analysis. We also hope to stimulate further research in this area.
One of the most commonly used data mining techniques is decision trees, also referred to as classification and regression trees or C&RT. Several new decision tree methods are based on ensembles or networks of trees and carry names like TreeNet and Random Forest. Viaene et al.
In this study, we propose a flexible and comprehensive iteration algorithm called “general iteration algorithm” (GIA) to model insurance ratemaking data.
This paper covers experiences in modeling mortgage insurance claims. In Section 2, mortgage insurance claims are considered an absorbing state in a Markov chain that involves transitions between the states of healthy, in arrears, property in possession, property sold, loan discharged, and claim.
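The absorbing-state structure can be illustrated with a small absorbing Markov chain over the states named above. The transition probabilities below are hypothetical, and the fundamental-matrix calculation is the standard textbook one, not taken from the paper.

```python
import numpy as np

# Hypothetical annual transition matrix (illustrative probabilities only).
# Transient: healthy, in arrears, property in possession, property sold
# Absorbing: loan discharged, claim
states = ["healthy", "arrears", "possession", "sold", "discharged", "claim"]
P = np.array([
    #  H      A      Pos    Sold   Disch  Claim
    [0.85,  0.05,  0.00,  0.00,  0.10,  0.00],   # healthy
    [0.40,  0.45,  0.10,  0.00,  0.05,  0.00],   # in arrears
    [0.00,  0.05,  0.45,  0.50,  0.00,  0.00],   # property in possession
    [0.00,  0.00,  0.00,  0.20,  0.50,  0.30],   # property sold
    [0.00,  0.00,  0.00,  0.00,  1.00,  0.00],   # loan discharged (absorbing)
    [0.00,  0.00,  0.00,  0.00,  0.00,  1.00],   # claim (absorbing)
])

Q, R = P[:4, :4], P[:4, 4:]                 # transient-to-transient, transient-to-absorbing
N = np.linalg.inv(np.eye(4) - Q)            # fundamental matrix: expected visits
B = N @ R                                   # absorption probabilities
for s, row in zip(states[:4], B):
    print(f"from {s:12s}: P(discharged) = {row[0]:.3f}, P(claim) = {row[1]:.3f}")
```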
Extended service contracts and their programs continue to evolve and expand to cover more and more products. This paper is intended to be a basic primer for the actuary or risk professional interested in either working in or understanding this area. We discuss the general structure of service contract programs and highlight features that should be considered in the review of the financial solidity of such programs.
This paper summarizes key results from the Report of the Casualty Actuarial Society (CAS) Research Working Party on Risk Transfer Testing. The Working Party defined and described a structured process of elimination to narrow down the field of reinsurance contracts that have to be tested for risk transfer.
This paper applies a bivariate lognormal distribution to price a property policy with property damage and business interruption cover subject to an attachment point, separate deductibles, and a combined limit. Curve-fitting tasks for univariate probability distributions are compared with the tasks required for multivariate probability distributions. This is followed by a brief discussion of the data used, data-related issues, and adjustments.
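A rough Monte Carlo sketch of pricing a cover of this shape: correlated lognormal property-damage and business-interruption losses, each net of its own deductible, with the combined net loss excess of an attachment point and capped at a single limit. The parameters and the exact layering of deductibles, attachment, and limit are assumptions, not the paper's fitted model or policy terms.

```python
import numpy as np

rng = np.random.default_rng(5)

# Correlated lognormal severities for property damage (PD) and business interruption (BI)
n, rho = 200_000, 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
pd_loss = np.exp(12.0 + 1.0 * z[:, 0])
bi_loss = np.exp(11.0 + 1.3 * z[:, 1])

ded_pd, ded_bi = 50_000.0, 25_000.0      # separate deductibles
attachment     = 250_000.0               # policy attaches above this retained amount
combined_limit = 5_000_000.0             # single limit over both covers

net = np.maximum(pd_loss - ded_pd, 0.0) + np.maximum(bi_loss - ded_bi, 0.0)
recovery = np.clip(net - attachment, 0.0, combined_limit)
print(f"expected recovery (pure premium): {recovery.mean():,.0f}")
```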
This paper demonstrates a Bayesian method for estimating the distribution of future loss payments of individual insurers.
Over the past twenty years many actuaries have argued that the chain-ladder method of loss reserving is biased; nonetheless, it remains the favorite tool of reserving actuaries. Nearly everyone who acknowledges this bias believes it to be upward. While supporting these claims and beliefs, the author proposes herein to deal with two deeper issues.
This paper presents a framework for stochastically modeling the path of the ultimate loss ratio estimate through time from the inception of exposure to the payment of all claims. The framework is illustrated using Hayne’s lognormal loss development model, but the approach can be used with other stochastic loss development models.
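A simplified illustration of how such a path can be simulated, not Hayne's actual model: age-to-age development factors are lognormal, and at each age the ultimate loss ratio estimate is restated as losses to date times the expected remaining development, divided by premium. All figures are assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

premium = 10_000.0
expected_factors = np.array([2.0, 1.4, 1.15, 1.05, 1.02])   # assumed mean age-to-age factors
cv = 0.10                                                   # assumed factor volatility

sigma = np.sqrt(np.log(1 + cv**2))
mu = np.log(expected_factors) - sigma**2 / 2

losses = 1_500.0                                            # losses reported at the first age
path = []
for age in range(len(expected_factors)):
    remaining = expected_factors[age:].prod()               # expected development to ultimate
    path.append(losses * remaining / premium)               # current ultimate loss ratio estimate
    losses *= rng.lognormal(mu[age], sigma)                 # one simulated year of development
path.append(losses / premium)                               # final, fully developed loss ratio

print(" -> ".join(f"{lr:.3f}" for lr in path))
```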
The tax shields from debt financing reduce the cost of operations for firms with low costs of bankruptcy. State regulation prevents insurers from using long-term debt as statutory surplus, to ensure sufficient equity capital to meet policyholder obligations. These constraints on regulatory capital force policyholders to fund the high tax costs borne by insurers and weaken the market forces that support solvency.
While accounting principles and actuarial standards of practice are all well designed, they provide only broad guidance to the actuary on what is “reasonable.” This broad guidance is based on the principle that “reasonable” assumptions and methods lead to “reasonable” estimates.
This paper applies the exponential dispersion family with its associated conjugates to the claims reserving problem. This leads to a formula for the claims reserves that is equivalent to applying credibility weights to the chain-ladder and Bornhuetter-Ferguson reserves.
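A small numerical sketch of what a credibility blend of chain-ladder and Bornhuetter-Ferguson reserves looks like for a single accident year; the figures and the credibility weight are assumptions, not the paper's exponential-dispersion-family result.

```python
# Credibility-weighted reserve: R = Z * R_CL + (1 - Z) * R_BF (illustrative figures)
paid_to_date    = 6_000.0
cdf_to_ultimate = 1.60            # chain-ladder cumulative development factor
prior_ultimate  = 10_000.0        # a priori expected ultimate (for BF)

pct_unreported = 1.0 - 1.0 / cdf_to_ultimate
reserve_cl = paid_to_date * (cdf_to_ultimate - 1.0)       # chain-ladder reserve
reserve_bf = prior_ultimate * pct_unreported              # Bornhuetter-Ferguson reserve

Z = 0.55                                                  # assumed credibility weight
reserve_cred = Z * reserve_cl + (1.0 - Z) * reserve_bf
print(f"CL: {reserve_cl:,.0f}  BF: {reserve_bf:,.0f}  credibility-weighted: {reserve_cred:,.0f}")
```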
This paper shows how expert opinion can be inserted into a stochastic framework for loss reserving. The reserving methods used are the chain-ladder and Bornhuetter-Ferguson, and the stochastic framework follows England and Verrall [8]. Although stochastic models have been studied, there are two main obstacles to their more frequent use in practice: ease of implementation and adaptability to user needs.