CAS Research Leaders Highlight Work Shaping Actuarial Practice
CAS research leaders, including the Vice President for Research & Practice Advancement and the chairs of the organization's research working groups, were recently asked to identify CAS-sponsored research from the past year that is influencing actuarial practice. Their responses highlight not only individual papers but also broader trends shaping the profession.
Across their selections, several themes emerge, including the growing use of AI and machine learning, increased focus on data privacy and synthetic data, and new approaches to modeling complex and emerging risks. Research on topics such as cyber risk, severe convective storms, and advanced statistical methods reflects a shift toward more granular data and more sophisticated analytical techniques.
Together, these perspectives provide a snapshot of how actuarial research is evolving, with a stronger emphasis on practical application and on addressing challenges tied to data, regulation, and changing risk environments. Read the leaders’ picks below.
Morgan Bugbee, CAS Vice President for Research & Practice Advancement

I found Filimonov's "Bridging Data Divides: AI as a New Paradigm for Unstructured Data" and Peiris, Jeong, and Zou's "Development of Telematics Safety Scores in Accordance with Regulatory Compliance" to be particularly exciting. Filimonov provides a clear guide on developing structured data features from unstructured text, offering an ideal introduction to techniques like cosine similarity and RAG. Similarly, the second paper impressively combines telematics, neural network embeddings, and constrained models into a single study.
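To make the first of those techniques concrete: cosine similarity measures how closely two pieces of text align, independent of their length. The sketch below is purely illustrative and is not taken from Filimonov's paper; it uses simple term-frequency vectors (real applications would typically use learned embeddings), and the claim-note strings are hypothetical.

```python
# Illustrative sketch only: cosine similarity between bag-of-words
# vectors built from two short (hypothetical) claim notes.
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine of the angle between two term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two notes sharing the terms "water" and "kitchen" score well above 0.
print(round(cosine_similarity("water damage in kitchen",
                              "kitchen water leak"), 3))
```

The same scoring idea underlies retrieval in a RAG pipeline: documents whose vectors sit closest to a query vector are the ones handed to the model as context.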
Mario DiCaro, Artificial Intelligence Working Group

There is a clear theme in data: growing access to granular data is colliding with growing regulatory oversight of data privacy. I loved the research highlighting the use of GANs and synthetic data to navigate that tension. The other area that I was glad to see being researched is cyber risk. The use of epidemiological models to study cyber contagion is an interesting way to handle portfolio risk and system breakdown during a massive correlated attack.
Andy Feng, Risk Working Group

The paper "Enterprise Risk Management through Deterministic Scenario Analysis" validates a long-held view that cyber risk for most insurance companies manifests primarily as an earnings event rather than a capital event. Using the study of cyber risk as an example, the paper illustrates how deterministic scenario analysis remains a relevant and practical ERM tool in a world of increasingly complex models.
Kenneth Hsu, Open-Source Projects Working Group

I believe actuaries are among the best appliers of data science, yet there is often a gap between theoretical data science ideas and their practical implementation. In the monograph "Practical Mixed Models for Actuaries," Ernesto Schirmacher shows how hierarchical models can offer advantages over traditional generalized linear models by connecting them to credibility concepts, which are already familiar to many actuaries. He also open-sources concrete, relatable, and reproducible examples with accompanying data, making the monograph especially valuable for practitioners.
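The connection between hierarchical models and credibility can be seen in partial pooling: each group's estimate is shrunk toward the overall mean with a weight that grows with the group's data volume, exactly the shape of a Bühlmann credibility factor Z = n / (n + k). The sketch below is not from Schirmacher's monograph; the fleet names, loss figures, and the fixed shrinkage constant k are hypothetical.

```python
# Hedged sketch (not from the monograph): partial pooling as
# credibility-weighted shrinkage toward the grand mean.
from statistics import mean

def credibility_estimates(groups: dict[str, list[float]],
                          k: float) -> dict[str, float]:
    """Shrink each group's observed mean toward the overall mean
    using credibility weight Z = n / (n + k)."""
    grand = mean(x for obs in groups.values() for x in obs)
    out = {}
    for name, obs in groups.items():
        z = len(obs) / (len(obs) + k)          # credibility weight
        out[name] = z * mean(obs) + (1 - z) * grand
    return out

claims = {
    "small_fleet": [120.0, 180.0],   # little data -> heavy shrinkage
    "large_fleet": [100.0] * 50,     # much data -> stays near its own mean
}
print(credibility_estimates(claims, k=10.0))
```

A fitted mixed model estimates the equivalent of k from the data (as the ratio of within-group to between-group variance) rather than taking it as given, which is the advantage over hand-set credibility constants.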
Heather Kanzlemar, Climate & Sustainability Working Group

Both "Advanced Analytics in Insurance: Utilizing Building Footprints Derived from Machine Learning and High-Resolution Imagery" and "Developing Rates for the Severe Convective Storm Peril in Property Insurance" show that precision matters for climate-sensitive exposures. Whether it's using building footprints instead of parcel centroids for flood and wildfire risk, or treating hail, tornado, and wind as distinct drivers in severe convective storm ratemaking, more granular data and analytics can materially change expected loss estimates, and therefore indicated rates.
Aran Paik, Ratemaking Working Group

Zamstein's "Enhancing Actuarial Ratemaking with Synthetic Data for Privacy Preservation" is very practical and shows how data anonymization can be done. Beyond privacy, the approach can also help address credibility and model evaluation.
Chandu Patel, Reserves Working Group

Gross's "The Development and Use of Claim Life Cycle Model" starts with a predictive modeling framework that describes the full lifecycle of a claim. This modeling approach offers numerous benefits, including more reliable reserve estimates, faster recognition of changes in claim mix, and fewer pricing distortions caused by differences in development. Component development and emergence models, used in conjunction with simulation, form an alternative framework for generating reserve estimates.
Juntao Zhang, Risk Working Group

The article "Quantifying Correlations: Empirical Analysis of Insurance Data" stands out to me because it provides invaluable empirical benchmarks for correlation factors among various risks in economic capital models, giving actuaries practical inputs instead of purely theoretical assumptions.
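The kind of pairwise statistic such a benchmark replaces is straightforward to compute. The sketch below is illustrative only and is not the paper's methodology; the loss-ratio series for the two lines of business are hypothetical, and real studies would use far longer histories and typically rank-based measures as well.

```python
# Illustrative sketch only: empirical Pearson correlation between
# two (hypothetical) annual loss-ratio series, the sort of pairwise
# input an economic capital model consumes.
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical loss ratios for two lines of business over five years.
property_lr = [0.62, 0.71, 0.55, 0.80, 0.66]
casualty_lr = [0.58, 0.69, 0.52, 0.77, 0.61]
print(round(pearson(property_lr, casualty_lr), 3))
```

With only a handful of observations such an estimate is noisy, which is precisely why published empirical benchmarks across many insurers are valuable as a sanity check on model inputs.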
The selections above highlight a field that is adapting to new data sources, new risks, and new analytical tools. CAS-sponsored research continues to play a key role in that evolution. For more information on CAS research, visit casact.org/research.