Insurance: Mathematics and Economics

Volume 34, Issue 3, Pages 391-548 (18 June 2004)

Ruined moments in your life: how good are the approximations?
Pages 421-447
H. Huang, M. A. Milevsky and J. Wang

What is the probability that your retirement funds will run out before you die? The authors consider this question in the context of paying an annuity certain until death funded by a stochastic investment portfolio. The ruin problem becomes one of determining whether a random-time integral of a geometric Brownian motion falls below a threshold level. The wealth process Wt satisfies the stochastic differential equation

dWt = (μ Wt - 1) dt + σ Wt dBt

where μ is the average instantaneous return on the investment portfolio, the annuity pays continuously at rate 1 per year, σ is the standard deviation of asset returns and dBt is the increment of a Brownian motion over dt. The authors compute a numerical solution by solving the Kolmogorov backward equation using a generalization of the Crank-Nicolson scheme. They also compute the moments of the relevant integral of a geometric Brownian motion and compare method-of-moments fits using the inverse gamma and lognormal distributions. The integral in question is known to have an inverse gamma distribution when stopped at an exponentially distributed random time, so it is not too surprising that the authors find the inverse gamma fit works well.
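As a rough illustration of the ruin problem (not the authors' PDE or moment-matching approach), the following Python sketch estimates the probability of running out of money by brute-force Monte Carlo, stepping the wealth equation above with an Euler scheme until ruin or death. All parameters, and the assumption of an exponentially distributed remaining lifetime, are illustrative.

```python
import numpy as np

def ruin_probability(w0=20.0, mu=0.07, sigma=0.20, mean_lifetime=20.0,
                     n_paths=50_000, dt=1.0 / 12, seed=1):
    """Monte Carlo estimate of P(wealth hits zero before death) when wealth
    follows dW_t = (mu*W_t - 1) dt + sigma*W_t dB_t, i.e. spending 1 per year
    out of a lognormally diffusing portfolio."""
    rng = np.random.default_rng(seed)
    # Illustrative assumption: exponentially distributed remaining lifetime.
    lifetime = rng.exponential(mean_lifetime, size=n_paths)
    wealth = np.full(n_paths, w0)
    ruined = np.zeros(n_paths, dtype=bool)
    t, horizon = 0.0, lifetime.max()
    while t < horizon:
        t += dt
        live = (lifetime > t) & ~ruined            # alive and still solvent
        if not live.any():
            break
        dB = rng.normal(0.0, np.sqrt(dt), size=live.sum())
        wealth[live] += (mu * wealth[live] - 1.0) * dt + sigma * wealth[live] * dB
        ruined[live] = wealth[live] <= 0.0         # ran out of money this step
    return ruined.mean()

print(f"Estimated ruin probability: {ruin_probability():.3f}")
```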

The methods discussed are very similar to earlier papers by Cummins, Geman and Yor that modeled payouts from catastrophe bonds using a geometric Brownian motion rate of claim payment. These models are also similar to models of Asian options in financial mathematics. An Asian option pays off when the average stock price over a given period is above (for a call) or below (for a put) the strike price. The stock price is usually modeled with a geometric Brownian motion, and the average is computed as an integral (a continuous-time average) over the life of the option, which explains the similarity. Because of the flexibility of the geometric Brownian motion for modeling a rate of claim payment, these models could provide a useful basis for modeling claim emergence and development in property and casualty lines.
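The connection can be made concrete with a short Monte Carlo valuation of an average-price Asian call, where the time average of a simulated geometric Brownian motion plays the same role as the integral in the ruin problem. The drift, volatility, and strike below are illustrative, and no discounting is applied.

```python
import numpy as np

def asian_call_mc(s0=100.0, strike=100.0, mu=0.05, sigma=0.20, T=1.0,
                  n_steps=252, n_paths=20_000, seed=2):
    """Undiscounted Monte Carlo value of an average-price Asian call: the
    payoff depends on the time average of a geometric Brownian motion, the
    same integral that drives the retirement-ruin problem above."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Exact log-increments of the GBM on the time grid.
    z = rng.normal(size=(n_paths, n_steps))
    log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                          axis=1)
    s = s0 * np.exp(log_paths)
    avg = s.mean(axis=1)                      # discrete proxy for the continuous average
    payoff = np.maximum(avg - strike, 0.0)    # in the money when the average beats the strike
    return payoff.mean()

print(f"Asian call value (no discounting): {asian_call_mc():.2f}")
```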

The authors' abstract is available on the journal's web site.

Detecting positive quadrant dependence and positive function dependence
Pages 467-487
A. Janic-Wróblewska, W. C. M. Kallenberg and T. Ledwina

There are many different notions of dependence between variables, all getting at the idea that under positive dependence large values of one variable are more likely to be associated with large values of another than would be the case if the variables were independent. For normally distributed variables, correlation provides an adequate test of independence: if X and Y are bivariate normal then X and Y are independent if and only if Cov(X,Y)=0. More generally, X and Y are independent if and only if Cov(f(X),g(Y))=0 for all f, g in a separating class of functions (which reduces to the linear functions for the bivariate normal). The idea in this paper is to find nice, tractable families of functions f and g which can be used to test for independence and then to construct appropriate statistical tests by estimating Cov(f(X), g(Y)). The authors do this in two separate cases: positive quadrant dependence (PQD) and positive function dependence (PFD). PQD asserts that Cov(f(F(X)), g(G(Y)))>=0 for all nondecreasing f, g, where F and G are the (empirical) distributions of X and Y. PFD asserts that Cov(f(F(X)), f(G(Y)))>=0 for all f. Both PQD and PFD imply positive linear correlation. The test functions are built from Legendre polynomials.
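A stripped-down version of the construction is easy to code: replace X and Y by their normalized ranks and estimate Cov(phi_j(F(X)), phi_k(G(Y))) for the first few Legendre polynomials phi_j. The Python sketch below illustrates the idea only; it is not the paper's data-driven test statistic and omits the selection rule and critical values.

```python
import numpy as np
from numpy.polynomial import legendre

def pqd_components(x, y, max_order=4):
    """Sample analogues of Cov(phi_j(F(X)), phi_k(G(Y))) for Legendre
    polynomials phi_j rescaled to [0, 1]; consistently positive values point
    towards positive quadrant dependence."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # normalized ranks of x
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)   # normalized ranks of y
    comps = {}
    for j in range(1, max_order + 1):
        pj_u = legendre.legval(2 * u - 1, [0] * j + [1])   # P_j shifted to [0, 1]
        for k in range(1, max_order + 1):
            pk_v = legendre.legval(2 * v - 1, [0] * k + [1])
            comps[(j, k)] = np.sqrt(n) * np.mean(pj_u * pk_v)
    return comps

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(size=500)      # positively dependent sample
stats = pqd_components(x, y)
print(f"(1,1) component (proportional to Spearman's rho): {stats[(1, 1)]:.2f}")
```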

After constructing appropriate test functions, the paper compares their performance against Spearman's rank correlation coefficient and other more recent statistics in a simulation study. The simulated data are drawn from several different bivariate models, including a model derived from a cubic regression, a bivariate stable distribution, and a heteroscedastic linear regression. The new tests improve on Spearman's rank correlation test.

The authors' abstract is available on the journal's web site.

Some new classes of consistent risk measures
Pages 505-516
Marc J. Goovaerts, Rob Kaas, Jan Dhaene and Qihe Tang

An alpha-consistent risk measure is defined as a rule on random variables which satisfies a set of consistency axioms and which is greater than the alpha-th percentile of the risk. The paper begins with a number of different business situations and discusses the appropriate set of axioms for each. For example, in an insurance/reinsurance setting it is desirable for a risk measure to be additive for comonotonic risks, so that layer prices add up. In other situations the risk measure should be additive for independent risks. For capital allocation a superadditive risk measure may be desirable, because it results in residual capital which can be allocated to shareholders. The authors go on to describe a new risk measure, which they call the Haezendonck risk measure (HRM). They show this measure is monotonic (for stochastic order), positively homogeneous, subadditive for risks X, Y with max(X+Y)=max(X)+max(Y), translation invariant, and preserves convex order (stop-loss order).

The construction of the HRM is a little involved. It depends on a quantile level alpha and a non-negative, strictly increasing, continuous function v with v(0)=0, v(1)=1 and v(∞)=∞. Define w=w(X,x) to be the unique solution to

E[ v( (X - x)+ / (w - x) ) ] = 1 - α.

The paper proves that such a solution exists. HRM(X) is then defined as the infimum of w(X,x) over all x less than max(X). The paper ends by relating the HRM to tail VaR.
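As a concrete illustration, the sketch below computes an empirical HRM for the power function v(t) = t^p (which satisfies v(0)=0, v(1)=1 and v(∞)=∞), solving the defining equation for w by bisection at each candidate x and then minimizing over a grid of x values. The choice of v, the grid between the sample minimum and maximum, and the lognormal test data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def haezendonck(sample, alpha=0.95, p=2.0, n_grid=200, n_bisect=60):
    """Empirical sketch of a Haezendonck risk measure with v(t) = t**p:
    for each retention x, solve E[v((X - x)_+ / (w - x))] = 1 - alpha for w
    by bisection, then take the smallest w over a grid of x below max(X)."""
    x_max = float(sample.max())
    target = 1.0 - alpha

    def solve_w(x):
        excess = np.maximum(sample - x, 0.0)
        def mean_v(w):
            return np.mean((excess / (w - x)) ** p)
        lo = x + 1e-12                                   # mean_v is far above target here
        hi = x + (x_max - x) / target ** (1.0 / p)       # mean_v(hi) <= target by construction
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            if mean_v(mid) > target:
                lo = mid
            else:
                hi = mid
        return hi

    xs = np.linspace(float(sample.min()), x_max - 1e-9, n_grid)
    return min(solve_w(x) for x in xs)

# Illustrative use on simulated lognormal losses.
rng = np.random.default_rng(4)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(f"HRM estimate (alpha=0.95, v(t)=t^2): {haezendonck(losses):.2f}")
```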

The authors' abstract is available on the journal's web site.

Estimating catastrophic quantile levels for heavy-tailed distributions
Pages 517-537
Gunther Matthys, Emmanuel Delafosse, Armelle Guillou and Jan Beirlant

This paper presents a generalization of the Hill estimator for the Pareto exponent, along with two extensions of the Hill estimator to censored data. It applies these statistics to extreme quantile estimation, develops a number of second-order refinements based on an exponential regression model, and compares the various estimators in a simulation study.

The treatment of censored data, such as would result from the application of policy limits to ground-up losses, makes the methods described useful for estimating the probability of a loss to a layer above the largest historical loss, given only a list of historical losses.
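For orientation, the sketch below implements the classical first-order Hill estimator together with the standard Weissman-type extrapolation for extreme quantiles and layer exceedance probabilities; the paper's censoring adjustments and second-order refinements are not reproduced.

```python
import numpy as np

def hill_tail_fit(losses, k):
    """Classical Hill estimator of the tail index from the k largest order
    statistics, plus Weissman-style extrapolation for extreme quantiles and
    exceedance probabilities."""
    x = np.sort(np.asarray(losses, dtype=float))
    n = len(x)
    threshold = x[n - k - 1]                                  # (n-k)-th order statistic
    gamma = np.mean(np.log(x[n - k:]) - np.log(threshold))    # Hill estimate of 1/alpha

    def quantile(p):
        # Estimated loss exceeded with probability p (p can be far below k/n).
        return threshold * (k / (n * p)) ** gamma

    def exceedance_prob(u):
        # Estimated probability of a loss above u, e.g. a layer attachment
        # higher than the largest historical loss.
        return (k / n) * (u / threshold) ** (-1.0 / gamma)

    return gamma, quantile, exceedance_prob

rng = np.random.default_rng(5)
losses = (rng.pareto(a=1.5, size=2_000) + 1.0) * 10.0   # simulated Pareto losses
gamma, quantile, exceed = hill_tail_fit(losses, k=200)
print(f"Hill estimate of tail index gamma: {gamma:.2f} (true value 1/1.5 = 0.67)")
print(f"1-in-10,000 loss estimate: {quantile(1e-4):,.0f}")
print(f"P(loss > {2 * losses.max():,.0f}): {exceed(2 * losses.max()):.2e}")
```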

The authors' abstract is available on the journal's web site.

Stephen Mildenhall