Maximum likelihood estimation has been the workhorse of statistics for decades, but alternative methods, going under the name “regularization,” are proving to have lower predictive variance. Regularization shrinks fitted values toward the overall mean, much like credibility does. There is good software available for regularization, and in particular, packages for Bayesian regularization make it easy to fit more complex models.
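The shrinkage the abstract describes can be sketched in a few lines. This is an illustrative credibility-style estimator, not code from the paper: the shrinkage constant `k` and the sample losses are assumed values chosen for the example, and the credibility weight Z = n / (n + k) plays the role of a ridge-type penalty pulling a group's fitted mean toward the overall mean.

```python
# Illustrative sketch (not from the paper): regularization shrinks a group
# estimate toward the overall mean, much as credibility weighting does.

def shrunk_estimate(group_losses, overall_mean, k=5.0):
    """Credibility-style shrinkage: Z * group mean + (1 - Z) * overall mean.

    k is an assumed shrinkage constant; larger k pulls harder toward the
    overall mean, analogous to a stronger ridge penalty.
    """
    n = len(group_losses)
    group_mean = sum(group_losses) / n
    z = n / (n + k)  # credibility weight grows with sample size
    return z * group_mean + (1 - z) * overall_mean

# A small group (n = 2) is shrunk strongly toward the overall mean of 90:
print(shrunk_estimate([100.0, 140.0], overall_mean=90.0))  # about 98.57
```

With only two observations the weight Z is 2/7, so the fitted value sits much closer to the overall mean (90) than to the raw group mean (120); as the group grows, Z approaches 1 and the shrinkage fades.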
Properly modeling changes over time is essential for forecasting and important for any model or process with data that span multiple time periods. Despite this, most approaches used are ad hoc or lack a statistical framework for making accurate forecasts.
In this paper we will analyze the model introduced in Siegenthaler (2017). The author promises to present estimators for the one-year (solvency) as well as the ultimate uncertainty of estimated ultimate claim amounts that neither depend on any claim data nor on the reserving method used to estimate these ultimates. Unfortunately, the model cannot fulfill this promise: it only corrects for some bias in the estimated ultimates.
An excess loss factor is a measure of expected loss that is in excess of a given per-occurrence limit. The National Council on Compensation Insurance (NCCI) uses excess loss factors in its retrospective rating plan as well as in aggregate and class ratemaking.
In ratemaking, calculation of a pure premium has traditionally been based on modeling frequency and severity in an aggregated claims model. For simplicity, it has been a standard practice to assume the independence of loss frequency and loss severity. In recent years, there has been sporadic interest in the actuarial literature exploring models that depart from this independence.
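Under the independence assumption the abstract mentions, the pure premium factors as E[S] = E[N] x E[X]. A minimal Monte Carlo sketch, with an assumed claim-count distribution and exponential severities chosen purely for illustration, checks that the simulated aggregate mean matches the product:

```python
import random

random.seed(42)

# Assumed illustrative distributions (not from the paper):
# frequency: claim counts in {0,1,2,3}; severity: exponential, mean 1000.
counts, probs = [0, 1, 2, 3], [0.5, 0.3, 0.15, 0.05]
freq_mean = sum(c * p for c, p in zip(counts, probs))  # E[N] = 0.75
sev_mean = 1000.0                                      # E[X]

# Pure premium under independence: E[S] = E[N] * E[X]
pure_premium = freq_mean * sev_mean                    # 750.0

# Monte Carlo check of the aggregate (collective risk) model:
trials = 200_000
total = 0.0
for _ in range(trials):
    n = random.choices(counts, probs)[0]
    total += sum(random.expovariate(1 / sev_mean) for _ in range(n))
sim_mean = total / trials
print(pure_premium, round(sim_mean, 1))
```

When frequency and severity are dependent (the departure the abstract refers to), the factorization E[S] = E[N]E[X] no longer holds and the simulated mean would drift away from the simple product.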
Because insured losses are positive, loss distributions start from zero and are right-tailed. However, residuals, or errors, are centered about a mean of zero and have both right and left tails. Seldom do error terms from models of insured losses seem normal. Usually they are positively skewed, rather than symmetric. And their right tails, as measured by their asymptotic failure rates, are heavier than that of the normal.
Several approximations for the distribution of aggregate claims have been proposed in the literature. In this paper, we have developed a saddlepoint approximation for the aggregate claims distribution and compared it with some existing approximations, such as NP2, gamma, IG, and gamma-IG mixture.
The concept of bias-variance tradeoff provides a mathematical basis for understanding the common modeling problem of under-fitting vs. overfitting. While bias-variance tradeoff is a standard topic in machine learning discussions, the terminology and application differ from that of actuarial literature. In this paper we demystify the bias-variance decomposition by providing a detailed foundation for the theory.
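The decomposition the abstract refers to, MSE = Bias^2 + Variance, can be verified numerically. This sketch (with assumed parameters, not taken from the paper) uses a deliberately biased shrinkage estimator c * sample_mean of a normal mean and checks that its empirical MSE splits exactly into squared bias plus variance:

```python
import random
import statistics

random.seed(0)

# Monte Carlo illustration of MSE = Bias^2 + Variance for the (biased)
# shrinkage estimator c * sample_mean. All parameters are assumed values.
mu, sigma, n, c, reps = 10.0, 4.0, 25, 0.8, 50_000

estimates = []
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    estimates.append(c * xbar)

mse = sum((e - mu) ** 2 for e in estimates) / reps
bias_sq = (statistics.fmean(estimates) - mu) ** 2
variance = statistics.pvariance(estimates)

# The two sides agree (the identity is exact for empirical moments):
print(round(mse, 3), round(bias_sq + variance, 3))
```

Shrinking with c = 0.8 trades a little bias (here bias = (c - 1) * mu = -2) for lower variance (c^2 * sigma^2 / n), which is precisely the under-fitting vs. over-fitting tradeoff the paper formalizes.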
This paper extends uniform-exposure credibility theory by making quadratic adjustments that take into account the squared values of past observations. This approach amounts to introducing nonlinearities in the framework, or to considering higher-order cross-moments in the computations. We first describe the full parametric approach and, for illustration, we examine the Poisson-gamma and Poisson-Pareto cases.
Risk aggregation is virtually everywhere in insurance applications. Indeed, in the vast majority of situations, insurers are interested in the properties of the sums of the risks they are exposed to, rather than in the stand-alone risks per se. Unfortunately, the problem of formulating the probability distributions of the aforementioned sums is rather involved, and as a rule does not have an explicit solution.
The COVID-19 pandemic not only induced widespread anxiety and inactivity in the global economy but also raised questions about
CAS E-Forum, Winter 2021 Featuring Three Independent Research Papers
CAS E-Forum, Spring 2021. Table of Contents (Call Papers on COVID-19 and the P&C Insurance Industry): "Interplay between Epidemiology and Actuarial Modeling" by Runhuan Feng, Ph.D., FSA, CERA; Longhao Jin, MS; and Sooie-Hoe Loke, Ph.D.; "Auto Insurance: Strategic Shift Required for Acquiring and Retaining the Right Customers in a Post COVID-19 World" by Swarnava Ghosh and Aditya
CAS E-Forum, Summer 2020 Featuring Five Reserves Call Papers and Four Independent Research Papers
This paper discusses an alternative approach to utilizing and credibility-weighting excess loss information for large account pricing. The typical approach is to analyze the burn costs in each excess layer directly (see Clark 2011, for example). Burn costs are extremely volatile as well as highly right-skewed, and so do not perform well with linear credibility methods, such as Bühlmann-Straub or similar methods (Venter 2003).
The standard method for calculating reserves for permanently injured worker benefits (indemnity and medical) is a combination of adjuster-estimated case reserves and reserves for incurred but not reported claims (IBNR) using a triangle method. There has been some interest in other reserving methodologies based on a calculation of future payments for the expected lifetime of the injured worker using a table of mortality rates.
This paper demonstrates a Bayesian approach for estimating loss costs associated with excess of loss reinsurance programs.
This paper proposes a method to derive paid tail factors using incurred tail factors and historical payout patterns. Traditionally, a ratio of paid-to-incurred losses—and its reciprocal, the conversion factor—may be used to convert payments at a specific maturity to incurred losses, prior to attaching an incurred tail factor. The implied paid tail factor would be the product of the incurred tail factor and the selected conversion factor.
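The product described in the abstract is straightforward arithmetic. A minimal sketch, using hypothetical numbers rather than the paper's actual selections:

```python
# Hypothetical illustration of the relationship stated in the abstract.
def paid_tail_factor(incurred_tail_factor, paid_to_incurred_ratio):
    """Paid tail = incurred tail x conversion factor, where the conversion
    factor is the reciprocal of the paid-to-incurred ratio at that maturity."""
    conversion_factor = 1.0 / paid_to_incurred_ratio
    return incurred_tail_factor * conversion_factor

# E.g., an incurred tail of 1.05 with 80% of incurred losses paid at maturity:
print(paid_tail_factor(1.05, 0.80))  # 1.3125
```

The lower the paid-to-incurred ratio at the attachment maturity, the larger the conversion factor and hence the implied paid tail.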
This paper proposes efficient statistical tools to detect which risk factors influence insurance losses before fitting a regression model. The statistical procedures are nonparametric and designed according to the format of the variables commonly encountered in P&C ratemaking: continuous, integer-valued (or discrete) or categorical.
Rating areas are commonly used to capture unexplained geographical variability of claims in insurance pricing. A new method for defining rating areas is proposed using a two-part generalized geoadditive model that models spatial effects smoothly using Gaussian Markov random fields. The first part handles zero/nonzero expenses in a logistic model; the second handles nonzero expenses (on log-scale) in a linear model.
An actuarial approach for calculating a relativity based on geographic diversification is presented. The method models correlation as a function of distance between two exposures, and uses that to calculate a risk margin for each policy. It assumes that any premium provision for a company risk margin is currently allocated in proportion to policy risk-free premium, which results in a uniform risk-loading uprate for all policies.
The world is going through an extraordinary event. Since it first appeared in Wuhan, China, in late 2019 (“First Covid-19 Case Happened in November, China Government Records Show - Report” 2020), the coronavirus has spread rapidly to most of the world’s population. Indeed, one of the difficulties of writing an article like this is to keep up with the pace of change.
Hierarchical compartmental reserving models give a parametric framework to describe aggregate insurance claims processes using differential equations.
Although available since the 1990s, cyber insurance is still a relatively new product that is ever-changing. The report uses a conceptual approach to identify and evaluate potential exposure measures for cyber insurance. In particular, the report studies the losses that can arise with each cyber insurance coverage and identifies potential exposure measures related to these losses.