Browse Research

Capital allocation is an essential task for risk pricing and performance measurement of insurance business lines. This paper provides a survey of existing capital allocation methods, including common approaches based on the gradients of risk measures and economic allocation arising from counterparty risk aversion. We implement all methods in two example settings: binomial losses and loss realizations from a catastrophe reinsurer.
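As a minimal sketch of the gradient (Euler) style of allocation the survey covers, the snippet below allocates expected-shortfall capital across simulated business lines; the lognormal lines and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def euler_es_allocation(losses, alpha=0.99):
    """Euler (gradient) allocation under expected shortfall.

    losses: (n_sims, n_lines) array of simulated losses by line.
    For ES, the gradient allocation to line i is the mean of that
    line's losses on the scenarios where the aggregate loss exceeds
    its alpha-quantile (VaR); the contributions sum to the ES of
    the aggregate.
    """
    total = losses.sum(axis=1)
    tail = total >= np.quantile(total, alpha)   # tail scenarios of the aggregate
    return losses[tail].mean(axis=0)            # E[X_i | S >= VaR_alpha(S)]

rng = np.random.default_rng(0)
lines = rng.lognormal(mean=[0.0, 0.5], sigma=1.0, size=(100_000, 2))
print(euler_es_allocation(lines))               # capital attributed to each line
```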
In this paper, we propose a generalization of the individual loss reserving model introduced by Pigeon et al. (2013), set in a discrete-time framework for claims development. We use a copula to model the potential dependence within the development structure of a claim, which accommodates a wide variety of marginal distributions. We also add a specific component for claims closed without payment.
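A minimal sketch of the copula idea, assuming a Gaussian copula joining two hypothetical claim-level marginals (a gamma settlement delay and a lognormal payment); the paper's actual development structure and marginals will differ.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, rho, marginal_ppfs, seed=0):
    """Draw joint samples whose dependence comes from a Gaussian copula
    and whose marginals are given by arbitrary quantile functions."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)   # correlated uniforms: the copula layer
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# Hypothetical marginals: a gamma settlement delay and a lognormal payment.
delay_payment = gaussian_copula_sample(
    10_000, rho=0.6,
    marginal_ppfs=[stats.gamma(a=2.0, scale=1.5).ppf,
                   stats.lognorm(s=1.0, scale=np.exp(8.0)).ppf])
```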
The concept of risk distribution, or aggregating risk to reduce the potential volatility of loss results, is a prerequisite for an insurance transaction. But how much risk distribution is enough for a transaction to qualify as insurance? This paper looks at different methods that can be used to answer that question and ascertain whether or not risk distribution has been achieved from an actuarial point of view.
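One common actuarial yardstick for "enough" risk distribution is how quickly pooling shrinks the relative volatility of results: for n independent risks, the coefficient of variation of the pool falls like 1/sqrt(n). A quick simulation, with purely hypothetical frequency and severity assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Relative volatility of the pooled result for growing pool sizes.
# For n i.i.d. risks, CV(S_n) = CV(X) / sqrt(n); watching the CV fall
# is one way to judge whether enough risk distribution is present.
for n in [1, 10, 100, 1000]:
    sims = rng.poisson(lam=0.05, size=(50_000, n)) * 40_000.0  # counts x fixed severity
    pooled = sims.sum(axis=1)
    print(n, round(pooled.std() / pooled.mean(), 3))
```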
This paper demonstrates an approach for applying the lasso variable shrinkage and selection method to loss models arising in actuarial science. Specifically, the group lasso penalty is applied to the GB2 distribution, a flexible distribution now widely used in actuarial research.
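The paper's group-lasso GB2 fit is more involved than can be shown here; as a sketch of the underlying idea (an L1 penalty that shrinks and selects coefficients in a likelihood-based loss model), here is proximal gradient descent for a lasso-penalized Poisson regression. The data, step size, and penalty weight are illustrative.

```python
import numpy as np

def soft_threshold(b, t):
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def lasso_poisson(X, y, lam=0.1, step=0.05, iters=2000):
    """Proximal gradient (ISTA) for an L1-penalized Poisson GLM:
    minimize  mean(exp(Xb) - y * Xb) + lam * ||b||_1."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (np.exp(X @ b) - y) / len(y)   # gradient of the Poisson NLL
        b = soft_threshold(b - step * grad, step * lam)
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
y = rng.poisson(np.exp(X @ np.array([0.5, 0.0, -0.3, 0.0, 0.0])))
print(lasso_poisson(X, y))   # irrelevant coefficients are shrunk to exactly zero
```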
The classical credibility theory circumvents the challenge of finding the bona fide Bayesian estimate (with respect to squared-error loss) by restricting attention to the class of linear estimators of the data. See, for example, Bühlmann and Gisler (2005) and Klugman et al. (2008) for a detailed treatment.
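The linear-estimator restriction yields the familiar Bühlmann credibility formula, sketched below with illustrative inputs.

```python
import numpy as np

def buhlmann_premium(x, mu, sigma2, tau2):
    """Bühlmann's linear Bayes (credibility) estimate of the next-period mean.

    x      : observed losses for one risk (length n)
    mu     : collective (prior) mean
    sigma2 : expected process variance  E[Var(X | theta)]
    tau2   : variance of hypothetical means  Var(E[X | theta])
    """
    n = len(x)
    k = sigma2 / tau2                  # credibility constant
    z = n / (n + k)                    # credibility weight in [0, 1)
    return z * np.mean(x) + (1 - z) * mu

print(buhlmann_premium([120.0, 90.0, 150.0], mu=100.0, sigma2=2500.0, tau2=400.0))
```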
The COVID-19 pandemic not only induced widespread anxiety and inactivity in the global economy but also raised questions about how society could better absorb financial damages arising from future catastrophic events. This paper addresses some important questions such as what a risk is, how risks can be transferred away from individuals and business owners, and what makes a risk insurable.
Table of Contents. 2021 Reinsurance Call Paper Prize Winner: "The Roulette Wheel and the Drunken Sailor: Principal-Agent Theory and Its Ramifications for Insurance and Reinsurance Risk Management," by Neil M. Bodoff, FCAS, MAAA.
Properly modeling changes over time is essential for forecasting and important for any model or process with data that span multiple time periods. Despite this, most approaches used are ad hoc or lack a statistical framework for making accurate forecasts.
In this paper we analyze the model introduced in Siegenthaler (2017). The author promises estimators for the one-year (solvency) as well as the ultimate uncertainty of estimated ultimate claim amounts that depend neither on any claims data nor on the reserving method used to estimate those ultimates. Unfortunately, the model cannot keep this promise: it merely corrects for some bias in the estimated ultimates.
An excess loss factor is a measure of expected loss in excess of a given per-occurrence limit. The National Council on Compensation Insurance (NCCI) uses excess loss factors in its retrospective rating plan as well as in aggregate and class ratemaking.
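One common formalization: the excess ratio behind an excess loss factor is E[(X - L)+] / E[X] for a per-occurrence limit L, computable from the severity survival function. A sketch with a hypothetical lognormal severity curve (not NCCI's):

```python
import numpy as np
from scipy import stats

def excess_ratio(sev, limit, upper=1e9, n=400_001):
    """E[(X - L)+] / E[X], using E[(X - L)+] = integral of S(x) from L upward
    (truncated at `upper`, which slightly understates a heavy tail)."""
    xs = np.linspace(limit, upper, n)
    return np.trapz(sev.sf(xs), xs) / sev.mean()

sev = stats.lognorm(s=2.0, scale=50_000)    # hypothetical severity curve
for L in [100_000, 500_000, 1_000_000]:
    print(L, round(excess_ratio(sev, L), 4))
```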
Because insured losses are positive, loss distributions start from zero and are right-tailed. However, residuals, or errors, are centered about a mean of zero and have both right and left tails. Error terms from models of insured losses seldom look normal: usually they are positively skewed rather than symmetric, and their right tails, as measured by their asymptotic failure rates, are heavier than the normal distribution's.
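A quick simulation illustrates the point: even with a correctly specified mean, residuals from lognormal losses are positively skewed with a heavy right tail. All parameters here are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=8.0, sigma=1.2, size=100_000)
residuals = losses - losses.mean()      # residuals from a constant-mean fit

print(stats.skew(residuals))            # strongly positive, not ~0 as for a normal
print(stats.kurtosis(residuals))        # heavy right tail: excess kurtosis >> 0
```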
Several approximations for the distribution of aggregate claims have been proposed in the literature. In this paper, we develop a saddlepoint approximation for the aggregate claims distribution and compare it with existing approximations such as the NP2, gamma, inverse Gaussian (IG), and gamma-IG mixture approximations.
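For a compound sum whose cumulant generating function K is available in closed form, the first-order saddlepoint density is exp(K(s) - sx) / sqrt(2 pi K''(s)) with K'(s) = x. A sketch for a compound Poisson sum with exponential severities, where K is explicit; all parameters are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Saddlepoint density for a compound Poisson sum with exponential severities,
# whose cumulant generating function is explicit:
#   K(s) = lam * (beta / (beta - s) - 1),  s < beta.
lam, beta = 5.0, 1.0 / 10_000.0            # claim rate; beta = 1 / mean severity

K  = lambda s: lam * (beta / (beta - s) - 1.0)
K1 = lambda s: lam * beta / (beta - s) ** 2          # K'(s)
K2 = lambda s: 2.0 * lam * beta / (beta - s) ** 3    # K''(s)

def saddlepoint_density(x):
    """f(x) ~= exp(K(s) - s*x) / sqrt(2*pi*K''(s)), where K'(s) = x."""
    s = brentq(lambda t: K1(t) - x, -1_000.0 * beta, beta * (1.0 - 1e-9))
    return np.exp(K(s) - s * x) / np.sqrt(2.0 * np.pi * K2(s))

print(saddlepoint_density(60_000.0))       # density above the mean lam/beta = 50,000
```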
The concept of bias-variance tradeoff provides a mathematical basis for understanding the common modeling problem of underfitting versus overfitting. While the bias-variance tradeoff is a standard topic in machine learning discussions, the terminology and application differ from those of the actuarial literature. In this paper we demystify the bias-variance decomposition by providing a detailed foundation for the theory.
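The decomposition can be checked by simulation: at a fixed test point, the average squared error splits into squared bias plus variance plus irreducible noise. A sketch with polynomial fits of increasing degree; the true function, noise level, and degrees are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(3 * x)                  # "true" regression function
x0, sigma = 0.5, 0.3                         # test point and noise sd

def fit_and_predict(deg, n=50):
    """Fit a degree-`deg` polynomial to a fresh sample; predict at x0."""
    x = rng.uniform(0, 1, n)
    y = f(x) + rng.normal(0, sigma, n)
    return np.polyval(np.polyfit(x, y, deg), x0)

for deg in [1, 3, 9]:                        # under-fit, about right, over-fit
    preds = np.array([fit_and_predict(deg) for _ in range(2000)])
    bias2 = (preds.mean() - f(x0)) ** 2
    print(deg, round(bias2, 5), round(preds.var(), 5))  # bias falls, variance rises
```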
This paper extends uniform-exposure credibility theory by making quadratic adjustments that take into account the squared values of past observations. This approach amounts to introducing nonlinearities in the framework, or to considering higher-order cross-moments in the computations. We first describe the full parametric approach and, for illustration, we examine the Poisson-gamma and Poisson-Pareto cases.
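In symbols, with X_1, ..., X_n the past observations (a sketch of the setup as described, not the paper's own notation):

```latex
% Classical credibility restricts to linear functions of the observations:
\hat{\mu}_{\mathrm{lin}} = a_0 + \sum_{j=1}^{n} a_j X_j
% The quadratic extension also admits the squared observations:
\hat{\mu}_{\mathrm{quad}} = a_0 + \sum_{j=1}^{n} a_j X_j + \sum_{j=1}^{n} b_j X_j^2
% with coefficients chosen to minimize expected squared error.
```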
Risk aggregation is virtually everywhere in insurance applications. Indeed, in the vast majority of situations, insurers are interested in the properties of the sums of the risks they are exposed to, rather than in the stand-alone risks per se. Unfortunately, the problem of formulating the probability distributions of the aforementioned sums is rather involved, and as a rule does not have an explicit solution.
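When no explicit form exists, the distribution of a sum of independent risks can still be computed numerically, for example by discretizing the marginals and convolving their probability mass functions. A minimal two-risk sketch with hypothetical gamma marginals; FFT methods and Panjer recursion scale the same idea.

```python
import numpy as np
from scipy import stats

# Pmf of the sum of two independent risks = convolution of their pmfs.
step, n = 100.0, 2048                       # cell width and grid size
edges = np.arange(n + 1) * step
pmf1 = np.diff(stats.gamma(a=2, scale=500).cdf(edges))
pmf2 = np.diff(stats.gamma(a=3, scale=300).cdf(edges))

pmf_sum = np.convolve(pmf1, pmf2)[:n]       # aggregate pmf on the same grid
# Cell k of each marginal holds mass near (k + 0.5) * step; the offsets add.
mean_sum = ((np.arange(n) + 1.0) * step * pmf_sum).sum()
print(mean_sum)                             # ~ 2*500 + 3*300 = 1,900
```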
CAS E-Forum, Winter 2021 Featuring Three Independent Research Papers
The world is going through an extraordinary event. Since it first appeared in Wuhan, China, in late 2019 (“First Covid-19 Case Happened in November, China Government Records Show - Report” 2020), the coronavirus has spread rapidly across most of the world. Indeed, one of the difficulties of writing an article like this is keeping up with the pace of change.
CAS E-Forum, Summer 2020 Featuring Five Reserves Call Papers and Four Independent Research Papers
This paper discusses an alternative approach to using and credibility-weighting excess loss information for large-account pricing. The typical approach is to analyze the burn costs in each excess layer directly (see Clark 2011, for example). Burn costs are extremely volatile as well as highly right-skewed, properties that do not work well with linear credibility methods such as Bühlmann-Straub or similar methods (Venter 2003).
The standard method for calculating reserves for permanently injured worker benefits (indemnity and medical) is a combination of adjuster-estimated case reserves and reserves for incurred but not reported (IBNR) claims calculated using a triangle method. There has been some interest in other reserving methodologies based on calculating future payments over the expected lifetime of the injured worker using a table of mortality rates.
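A minimal sketch of the life-table alternative: discount each year's benefit both for interest and for the probability that the claimant is still alive. The flat mortality rate, discount rate, and benefit amount below are hypothetical.

```python
import numpy as np

def pv_lifetime_benefits(annual_payment, qx, rate=0.03):
    """Present value of annual benefits paid while the claimant survives.

    qx   : mortality rates for each future year, starting now
    rate : annual discount rate
    """
    px = np.cumprod(1.0 - np.asarray(qx))         # survival to end of each year
    v = (1.0 + rate) ** -np.arange(1, len(px) + 1)
    return annual_payment * (px * v).sum()

# Hypothetical flat mortality over a 40-year horizon, purely for illustration.
print(pv_lifetime_benefits(50_000.0, qx=[0.02] * 40))
```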
This paper demonstrates a Bayesian approach for estimating loss costs associated with excess of loss reinsurance programs.
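The paper's model is not reproduced here; as a minimal conjugate sketch of Bayesian estimation for an excess layer, a gamma prior on the Poisson rate of claims exceeding the attachment point updates in closed form. All numbers, including the assumed layer severity, are illustrative.

```python
# Gamma-Poisson conjugate update for the frequency of claims that
# pierce the reinsurance attachment point (not the paper's model).
alpha0, beta0 = 2.0, 4.0           # prior: mean excess frequency 0.5 per year
excess_counts = [0, 1, 0, 2, 0]    # observed excess claims in five treaty years

alpha = alpha0 + sum(excess_counts)   # posterior shape
beta = beta0 + len(excess_counts)     # posterior rate
print(alpha / beta)                   # posterior mean excess frequency

# Expected layer loss cost = posterior mean frequency x mean layer severity.
mean_layer_severity = 250_000.0       # assumed, for illustration
print(alpha / beta * mean_layer_severity)
```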