Browse Research

In this paper we rigorously investigate the common shock, or contagion, model for correlating insurance losses. We develop further theory describing how the common shock model can be incorporated within a larger set of distributions, and we address the calibration of contagion models to empirical data, proposing several procedures that use real-world industry data.
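For reference (and not necessarily in the parameterization used in the paper), one standard formulation of the contagion model takes claim counts for each line as conditionally Poisson given a shared mixing variable:

$$
N_i \mid \Theta \sim \text{Poisson}(\lambda_i \Theta), \qquad \mathrm{E}[\Theta] = 1, \quad \mathrm{Var}(\Theta) = c,
$$

which gives

$$
\mathrm{Var}(N_i) = \lambda_i + c\,\lambda_i^2, \qquad \mathrm{Cov}(N_i, N_j) = c\,\lambda_i \lambda_j \quad (i \neq j),
$$

so a single contagion parameter $c$ drives both the over-dispersion within each line and the correlation across lines.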
NCCI recently completed an extended review of its Experience Rating (ER) Plan. Although no major changes had been made for many years, testing indicated that ER Plan performance was still generally good. The primary cause of deteriorating performance was the use of a fixed split point between primary and excess losses while average claim severity increased dramatically.
Consideration of parameter risk is particularly important for actuarial models of uncertainty. That is because—unlike process risk—parameter risk does not diversify when modeling a large volume of independent exposures. Without consideration of parameter risk, decision makers may be tempted to underwrite higher volumes as a result of the apparent high degree of predictability in the mean outcome.
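A minimal simulation makes the diversification point concrete. The Gamma prior on the frequency parameter and the specific values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
lam = 0.05          # assumed expected claim frequency per exposure
cv_param = 0.10     # assumed coefficient of variation of the frequency parameter

for n_exposures in (100, 10_000, 1_000_000):
    # Process risk only: the frequency parameter is known exactly.
    process_only = rng.poisson(lam * n_exposures, size=n_sims)

    # Process + parameter risk: the frequency parameter itself is uncertain.
    shape = 1.0 / cv_param**2
    lam_draw = rng.gamma(shape, lam / shape, size=n_sims)
    with_param = rng.poisson(lam_draw * n_exposures)

    print(n_exposures,
          round(process_only.std() / process_only.mean(), 4),
          round(with_param.std() / with_param.mean(), 4))

# The process-only CV shrinks toward zero as exposures grow, while the CV with
# parameter risk levels off near cv_param: parameter risk does not diversify.
```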
Interaction with actuarial models by both actuaries and non-actuaries is inevitable and requires careful study so that these models may better serve their purpose. Yet empirical and scientific investigations into how experts make their judgments are rarely reported in actuarial science literature.
This paper presents and compares different risk classification models for the frequency and severity of claims employing regression models for location, scale and shape. The differences between these models are analyzed through the mean and the variance of the annual number of claims and the costs of claims of the insureds who belong to different risk classes, and interesting results about claiming behavior are obtained.
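For readers unfamiliar with the framework, a generic location-scale-shape regression (in the GAMLSS spirit; the notation here is ours, not the paper's) gives each distribution parameter its own linear predictor:

$$
y_i \sim \mathcal{D}(\mu_i, \sigma_i, \nu_i), \qquad g_1(\mu_i) = \mathbf{x}_{1i}^{\top}\boldsymbol{\beta}_1, \quad g_2(\sigma_i) = \mathbf{x}_{2i}^{\top}\boldsymbol{\beta}_2, \quad g_3(\nu_i) = \mathbf{x}_{3i}^{\top}\boldsymbol{\beta}_3,
$$

so rating variables can shift not only the mean but also the variance (and shape) of claim counts and claim costs across risk classes.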
Actuaries have used the so-called “square root rule” for credibility for many years, even though the “F” value can take any value and the rule’s assumption that the data receiving the complement of credibility is stable is often violated. Best estimate credibility requires fewer or no assumptions, but often requires certain key constants.
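For context, the classical limited-fluctuation form of the rule (in generic notation, not necessarily the paper's) assigns partial credibility

$$
Z = \min\!\left(1,\ \sqrt{\frac{n}{n_F}}\right),
$$

where $n$ is the observed claim count (or exposure) and $n_F$ is the corresponding full-credibility standard; the complement $1 - Z$ goes to the prior or related experience, which the rule implicitly treats as stable.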
A continuous version of Sherman’s discrete inverse power curve model for loss development is defined. This continuous version, apparently unlike its discrete counterpart, has simple formulas for cumulative development factors, including tail factors. The continuous version has the same tail convergence conditions and basic analytical properties as the discrete version.
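For reference, the discrete inverse power curve models the age-to-age factor at maturity $t$ as (in one common parameterization)

$$
f_t = 1 + a\,t^{-b}, \qquad a > 0,\ b > 0,
$$

so the cumulative factor from age $T$ to ultimate is the infinite product $\prod_{t \ge T} (1 + a\,t^{-b})$, which in the discrete model has no simple closed form; the continuous version described in the paper is constructed so that such cumulative and tail factors do.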
This paper discusses a methodology of calculating actuarial housing values, with the goal of helping mortgage lenders to gauge departures of housing market values from the fundamentals, and assisting policymakers with tools for implementing counter-cyclical policies.
In this paper we explore a method to model the financial risks of holding portfolios of long-term temperature derivatives for any subset of the 30 North American cities whose derivatives are actively traded on the Chicago Mercantile Exchange (CME).
The infinite product of the age-to-age development factors in Sherman’s inverse power curve model is proven to converge to a finite number when the power parameter is less than −1, and alternatively to diverge to infinity when the power parameter is −1 or greater.
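The convergence criterion follows from the standard fact that an infinite product of terms $1 + x_t$ with $x_t > 0$ converges exactly when $\sum x_t$ converges. With age-to-age factors $f_t = 1 + a\,t^{p}$ (generic notation, $a > 0$),

$$
\prod_{t=T}^{\infty} \left(1 + a\,t^{p}\right) < \infty \;\iff\; \sum_{t=T}^{\infty} a\,t^{p} < \infty \;\iff\; p < -1,
$$

since $\log(1 + a\,t^{p}) \sim a\,t^{p}$ as $t \to \infty$ and the $p$-series diverges for $p \ge -1$.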
Recently, Parsa and Klugman (2011) proposed a generalization of ordinary least squares regression, which they called copula regression. Though theoretically appealing, copula regression is generally more involved to implement and calibrate than generalized linear models. In this paper a linear approximation to copula regression, whose implementation is similar to that for least squares regression, will be introduced.
Double chain ladder, introduced by Martínez-Miranda et al. (2012), is a statistical model for predicting the outstanding claims reserve. Double chain ladder and Bornhuetter-Ferguson is an extension of the originally described double chain ladder model that gains more stability by including expert knowledge via an incurred claim amounts triangle.
Quantile testing is a key technique for fitting parameters and testing performance in workers compensation experience rating, and the number of quantile intervals must be specified for such a test. A model is developed to compare the error in the quantile test’s empirical estimates of relative pure loss ratios to the interquantile differences between expected pure loss ratios.
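A bare-bones quantile test looks roughly like the sketch below; the column names, the number of intervals, and the actual-to-expected statistic are illustrative choices, not the paper's specification:

```python
import numpy as np
import pandas as pd

def quantile_test(df, n_quantiles=10):
    """Sort risks by expected (modified) pure premium, cut them into quantile
    intervals of roughly equal expected losses, and compare actual to expected
    loss ratios interval by interval."""
    df = df.sort_values("expected_loss").copy()
    cum_expected = df["expected_loss"].cumsum()
    df["bin"] = np.minimum(
        (cum_expected / cum_expected.iloc[-1] * n_quantiles).astype(int),
        n_quantiles - 1,
    )
    out = df.groupby("bin").agg(
        actual=("actual_loss", "sum"),
        expected=("expected_loss", "sum"),
        premium=("premium", "sum"),
    )
    out["actual_LR"] = out["actual"] / out["premium"]
    out["expected_LR"] = out["expected"] / out["premium"]
    out["actual_to_expected"] = out["actual"] / out["expected"]
    return out
```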
More casualty actuaries would employ the discrete Fourier transform (DFT) if they understood it better. In addition to the many fine papers on the DFT, this paper might be regarded as just one more introduction. However, the topic uniquely explained herein is how the DFT treats the probability of amounts that overflow its upper bound, a topic that others either have not noticed or have deemed of little importance.
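The wrap-around (“overflow”) behavior is easy to see in a compound-Poisson example computed with the DFT; the severity distribution and the buffer length n below are arbitrary illustrative choices:

```python
import numpy as np

n = 64                     # length of the discretized buffer (upper bound of the lattice)
lam = 3.0                  # Poisson expected claim count

# Discretized severity pmf on 0, 1, ..., n-1 (an arbitrary example).
sev = np.zeros(n)
sev[1:41] = 1.0 / 40.0     # uniform severity on 1..40

# Aggregate distribution via the DFT: the compound-Poisson transform is
# exp(lambda * (phi_X - 1)) evaluated at each Fourier frequency.
phi_sev = np.fft.fft(sev)
agg = np.real(np.fft.ifft(np.exp(lam * (phi_sev - 1.0))))

print(agg.sum())           # ~1.0: the DFT never "loses" probability...
# ...because any aggregate amount of n or more is wrapped (aliased) back onto
# 0..n-1 rather than truncated. With lam = 3 and severities up to 40, a
# noticeable share of the true distribution exceeds 63 and shows up,
# incorrectly, at the low end of `agg`.
```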
Actuaries quite often have to interpolate data to obtain quantities such as loss development factors (LDFs) for maturities in between the maturities included in a loss development triangle, or increased limits factors for limits between the data points used in the increased limits analysis.
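One simple way (and by no means the paper's method) to interpolate cumulative development factors between tabulated maturities is to interpolate log(CDF − 1) linearly in age, which keeps the interpolated factors above 1 and decaying smoothly; the ages and factors below are made up:

```python
import numpy as np

ages = np.array([12.0, 24.0, 36.0, 48.0, 60.0])     # maturities in months
cdf = np.array([3.50, 1.80, 1.35, 1.15, 1.05])       # cumulative LDFs to ultimate

def interpolate_cdf(age):
    """Interpolate a cumulative development factor at an arbitrary maturity
    by linear interpolation of log(CDF - 1) in age (one common heuristic)."""
    y = np.log(cdf - 1.0)
    return 1.0 + np.exp(np.interp(age, ages, y))

print(interpolate_cdf(30.0))   # factor at 30 months, between the 24- and 36-month values
```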
In the present paper we consider the claims reserving problem in a multivariate context. More precisely, we apply the multivariate generalization of the well-known credibility model proposed by Bühlmann and Straub (1970) to claims reserving.
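For orientation, the univariate Bühlmann-Straub estimator that the paper generalizes credibility-weights each risk's experience against the collective mean:

$$
\widehat{\mu}_i = Z_i\,\bar{X}_i + (1 - Z_i)\,\widehat{\mu}, \qquad Z_i = \frac{w_{i\bullet}}{w_{i\bullet} + s^2/\tau^2},
$$

with $\bar{X}_i$ the weighted average experience of risk $i$, $w_{i\bullet}$ its total weight, $s^2$ the expected process variance, and $\tau^2$ the variance of the hypothetical means; the multivariate version replaces these scalars by vectors and matrices so that correlated run-off triangles can borrow strength from one another.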
Copula models have been popular in risk management. Due to the properties of asymptotic dependence and easy simulation, the t-copula has often been employed in practice. A computationally simple estimation procedure for the t-copula is to first estimate the linear correlation via Kendall’s tau estimator and then to estimate the degrees-of-freedom parameter by maximizing the pseudo-likelihood function.
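The two-step procedure is compact enough to sketch. The bivariate version below uses SciPy (scipy.stats.multivariate_t requires SciPy 1.6+); the function and variable names are ours:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def fit_t_copula(x, y):
    """Two-step t-copula fit: Kendall's tau for the correlation, then
    pseudo-likelihood maximization for the degrees of freedom."""
    # Step 1: linear correlation from Kendall's tau (standard relation for elliptical copulas).
    tau, _ = stats.kendalltau(x, y)
    rho = np.sin(np.pi * tau / 2.0)

    # Pseudo-observations: ranks scaled into (0, 1).
    n = len(x)
    u = stats.rankdata(x) / (n + 1.0)
    v = stats.rankdata(y) / (n + 1.0)
    corr = np.array([[1.0, rho], [rho, 1.0]])

    def neg_pseudo_loglik(nu):
        # t-copula log-density: multivariate t density over the product of
        # univariate t densities, evaluated at the t quantiles of (u, v).
        z = np.column_stack([stats.t.ppf(u, nu), stats.t.ppf(v, nu)])
        num = stats.multivariate_t(loc=[0.0, 0.0], shape=corr, df=nu).logpdf(z)
        den = stats.t.logpdf(z, nu).sum(axis=1)
        return -(num - den).sum()

    # Step 2: maximize the pseudo-likelihood over the degrees of freedom.
    res = minimize_scalar(neg_pseudo_loglik, bounds=(2.01, 100.0), method="bounded")
    return rho, res.x

# Usage: rho_hat, nu_hat = fit_t_copula(x, y)
```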
Retention is an important factor that impacts both profit and growth of insurance companies. Conventional retention analysis, such as logistic regression, does not distinguish between two types of attrition: mid-term cancellation and end-term nonrenewal. In this paper, the authors propose to use survival analysis to estimate attrition and retention.
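As a sketch of the idea (not the authors' implementation), a Kaplan-Meier estimate of the retention curve treats still-in-force policies as censored rather than discarding them; the paper's further distinction between mid-term cancellation and end-term nonrenewal is not modeled here:

```python
import numpy as np

def kaplan_meier(durations, lapsed):
    """Kaplan-Meier estimate of the retention (survival) curve.

    durations: time each policy was observed (e.g., months since inception)
    lapsed:    1 if the policy terminated (cancellation or nonrenewal),
               0 if it is still in force (censored)
    """
    durations = np.asarray(durations, dtype=float)
    lapsed = np.asarray(lapsed, dtype=int)
    event_times = np.sort(np.unique(durations[lapsed == 1]))
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = np.sum(durations >= t)
        events = np.sum((durations == t) & (lapsed == 1))
        surv *= 1.0 - events / at_risk
        curve.append((t, surv))
    return curve

# Example: five policies, two lapsed at 6 and 18 months, three still in force.
print(kaplan_meier([6, 12, 18, 24, 24], [1, 0, 1, 0, 0]))
```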
When estimating loss reserves, actuaries usually give varying weights to multiple indications to arrive at their final selected indication. The common practice is to give weight to indications that have been developed to their ultimate expected amount.
A firm will replace a physical asset at the end of its useful life. This fact demonstrates that there is a notion of mortality implicit in the way an enterprise manages its physical assets. We propose a theory that there is also an efficient time to replace a physical asset that is random and observable.
This paper back-tests the popular over-dispersed Poisson bootstrap of the paid chain-ladder model from England and Verrall (2002), using data from hundreds of U.S. companies spanning three decades. The results show that the modeled distributions underestimate reserve risk. We investigate why this may occur and propose two methods to increase the variability of the distribution so that it passes the back-test.
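To ground the terminology, here is a bare-bones sketch of the ODP bootstrap of the paid chain ladder in the spirit of England and Verrall; it omits the process-variance step and the usual residual adjustments, so it understates the spread and is not the authors' back-testing machinery:

```python
import numpy as np

def ata_factors(cum):
    """Volume-weighted chain-ladder age-to-age factors from a cumulative
    triangle stored as a square array with NaN below the latest diagonal."""
    n = cum.shape[0]
    return np.array([
        np.nansum(cum[: n - j - 1, j + 1]) / np.nansum(cum[: n - j - 1, j])
        for j in range(n - 1)
    ])

def odp_bootstrap(cum, n_sims=5000, seed=0):
    """Resample scaled Pearson residuals of the incremental triangle, refit
    the chain ladder, and re-project to obtain a reserve distribution."""
    rng = np.random.default_rng(seed)
    n = cum.shape[0]
    f = ata_factors(cum)

    # Chain-ladder fitted cumulative triangle, back-filled from the diagonal.
    fitted = np.full_like(cum, np.nan)
    for i in range(n):
        last = n - i - 1
        fitted[i, last] = cum[i, last]
        for j in range(last, 0, -1):
            fitted[i, j - 1] = fitted[i, j] / f[j - 1]

    inc = np.diff(cum, axis=1, prepend=0.0)
    inc_fit = np.diff(fitted, axis=1, prepend=0.0)
    resid = (inc - inc_fit) / np.sqrt(inc_fit)      # unscaled Pearson residuals
    pool = resid[np.isfinite(resid)]

    reserves = np.empty(n_sims)
    for s in range(n_sims):
        r = rng.choice(pool, size=inc_fit.shape)
        pseudo = np.cumsum(inc_fit + r * np.sqrt(inc_fit), axis=1)
        fb = ata_factors(pseudo)
        total = 0.0
        for i in range(n):
            last = n - i - 1
            total += pseudo[i, last] * np.prod(fb[last:]) - cum[i, last]
        reserves[s] = total
    return reserves
```

Feeding in a cumulative paid triangle returns simulated total reserves whose percentiles can then be compared with the actual subsequent run-off, which is the essence of such a back-test.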
Maximum likelihood estimators provide a powerful statistical tool. In this paper we directly deal with non-linear reserving models, without the need to transform those models to make them tractable for linear or generalized linear methods.
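As a generic illustration of fitting a non-linear reserving model directly by maximum likelihood (an assumed growth-curve model in the spirit of Clark's LDF curve-fitting method, with made-up data, and not the model developed in this paper):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical incremental paid losses by accident year (rows) and development period (cols).
inc = np.array([
    [1000.0, 600.0, 300.0, 150.0],
    [1100.0, 650.0, 320.0, np.nan],
    [1200.0, 700.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])
ages = np.array([12.0, 24.0, 36.0, 48.0])   # maturity in months at the end of each period

def growth(t, theta, omega):
    """Loglogistic growth curve G(t) = t^omega / (t^omega + theta^omega)."""
    return t ** omega / (t ** omega + theta ** omega)

def neg_loglik(params):
    """Over-dispersed-Poisson-type quasi-log-likelihood with expected
    incrementals mu[i, j] = U_i * (G(age_j) - G(age_{j-1})), maximized
    directly over the non-linear parameters."""
    theta, omega = np.exp(params[:2])   # log-parameterized to stay positive
    ult = np.exp(params[2:])            # one ultimate per accident year
    g = growth(ages, theta, omega)
    mu = ult[:, None] * np.diff(g, prepend=0.0)[None, :]
    mask = ~np.isnan(inc)
    return -np.sum(inc[mask] * np.log(mu[mask]) - mu[mask])

x0 = np.log([24.0, 1.5, 2500.0, 2500.0, 2500.0, 2500.0])
fit = minimize(neg_loglik, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-8, "xatol": 1e-8})
theta_hat, omega_hat = np.exp(fit.x[:2])
ultimates = np.exp(fit.x[2:])
```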
Property-casualty insurance companies tend to buy reinsurance; when they do, they must address reinsurance credit risk.
This paper tackles the question: why should split credibility be better than credibility without a split? It corrects previous misunderstandings and presents new formulas showing how parameter uncertainty is reduced by use of unsplit credibility and then how it might be further reduced by introduction of a split. It derives the formulas for unsplit and split credibility when losses follow the widely used collective risk model (CRM).
In many applied claims reserving problems in P&C insurance, the claims settlement process goes beyond the latest development period available in the observed claims development triangle. This makes it necessary to estimate so-called tail development factors which account for the unobserved part of the insurance claims. We estimate these tail development factors in a mathematically consistent way.
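One widely used, if simpler than the paper's mathematically consistent approach, way to extrapolate a tail factor is to fit an exponential decay to the observed age-to-age factors and extend it beyond the triangle; the factors below are illustrative:

```python
import numpy as np

ages = np.array([1, 2, 3, 4, 5, 6])                  # development periods with observed factors
ata = np.array([2.80, 1.45, 1.18, 1.09, 1.05, 1.03]) # selected age-to-age factors

# Fit ln(f_j - 1) = a + b * j by least squares, then extrapolate the decay.
a, b = np.polyfit(ages, np.log(ata - 1.0), 1)[::-1]

future = np.arange(ages[-1] + 1, ages[-1] + 51)      # project 50 further periods
tail_factor = np.prod(1.0 + np.exp(a + b * future))
print(tail_factor)
```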