Insurance for AI? AI Risk: An Actuarial Opportunity

by Daniel Drabik, ACAS

Imagine a one-person startup, fully reliant on AI platforms, that collapses when a chain of unchecked errors turns a tiny initial AI mistake into a total loss. Who is responsible? How can policyholders be protected from the risks of AI use? As the recent releases of GPT-5, Grok Imagine, and Genie 3 demonstrate, AI capabilities continue to accelerate, and so do the risks. Karthik Ramakrishnan, drawing on his years in tech and on work alongside Dr. Yoshua Bengio (considered by many a godfather of AI), founded Armilla AI to specialize in risk identification, mitigation, and transfer for AI systems, using advanced model validation and predictive analytics to build trust and support robust underwriting. In this interview, discover how actuaries (both credentialed and emerging) are uniquely positioned to validate models, forecast failures, and design insurance strategies that manage AI risk and expand the profession into the AI frontier.

What was the thought process that led you to connect insurance with AI risk?

After many years in tech, I had the opportunity to build true AI applications across various industries from 2017 to 2019, working with Dr. Yoshua Bengio. During that time, a consistent piece of client feedback was that these AI systems were not flawless; they are probabilistic and, thus, contain errors. How can a company relinquish control to machines that are flawed from the start? Despite this, my thesis was that many existing processes would be intelligently automated, and once automated, those processes would risk a domino effect. That was one aspect. The other became clear when I casually shared my thesis with friends who worked in the insurance industry. They explained that, if my thesis held, the industry would at some point need to reconsider how it underwrites a company. The coverages or exposures might not change, but the source of the risk (AI) and the risk factors would be different; the risk profile would be different; and actuarial models would need to be adjusted and policies re-underwritten. That is where the connection was made. I had never considered insurance before then, but the emerging risk was real. So real, in fact, that I concluded: AI needed insurance.

Can you expand on the risk of AI processes causing a domino effect and explain why this could be an opportunity for actuaries? 

As AI systems gain traction, more companies are using them extensively. An extreme case is a solo-founder company that relies entirely on AI tools to operate. If any AI system makes an error, it could trigger a domino effect: a small mistake in one AI could compound across others. Imagine an AI adding an extra zero to a $10,000 insurance claim, making it $100,000. If that error flows through other AI processes, the consequences could be very material. AI's probabilistic design guarantees occasional failures, which raises the question of liability and could devastate a small business. So, with widespread AI adoption, these risks become tangible, insuring against them becomes crucial, and actuaries have an opportunity to contribute real value.
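To make the compounding concrete, here is a minimal sketch in Python. The independence assumption and the 1% per-step error rate are purely illustrative figures, not numbers from Armilla; the point is only how quickly the chance of at least one unchecked error grows as more AI steps are chained together.

```python
# Minimal sketch: probability that at least one unchecked error occurs
# somewhere in a chain of independent AI-driven steps.
# The per-step error rate (1%) is a hypothetical, illustrative figure.
def chain_error_probability(per_step_error_rate: float, n_steps: int) -> float:
    # P(no error in any step) = (1 - p)^n, so P(at least one error) is the complement.
    return 1.0 - (1.0 - per_step_error_rate) ** n_steps

for steps in (1, 5, 10, 20):
    p = chain_error_probability(0.01, steps)
    print(f"{steps:>2} chained steps at 1% error each -> {p:.1%} chance of at least one error")
```

Under these toy assumptions, a 20-step pipeline already carries roughly an 18% chance of at least one error slipping through. That is the frequency side of the risk; severity, as in the $10,000-to-$100,000 example, is what an error becomes once it propagates.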

How can actuaries play a role and build trust in AI systems? 

Actuaries are trained to see the math behind a model, and building trust in AI systems is the most immediate problem that needs to be solved. That's where actuaries can help: with model validation and model testing (e.g., assessing a model across multiple dimensions, such as bias) so that companies understand 1) how the model is going to fail and 2) what impact that failure would have. Can we get comfortable predicting how a model is going to fail? Confidence here hinges on data, and a lot of data is needed.
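As a toy illustration of one such dimension, a bias check, the sketch below compares error rates across subgroups in a small hypothetical dataset. The records, field names, and flagging tolerance are all assumptions made for illustration, not a description of any particular validation suite.

```python
from collections import defaultdict

# Toy bias check: compare model error rates across subgroups.
# All records, field names, and the 5-point tolerance below are hypothetical.
def error_rates_by_group(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for r in records:
        stats = counts[r["group"]]
        stats[0] += r["predicted"] != r["actual"]  # bool adds as 0 or 1
        stats[1] += 1
    return {group: errors / total for group, (errors, total) in counts.items()}

records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 1},
]

rates = error_rates_by_group(records)
spread = max(rates.values()) - min(rates.values())
print(rates, f"spread = {spread:.0%}")
if spread > 0.05:  # hypothetical tolerance: flag gaps above 5 points
    print("Flag: error rates diverge across groups; investigate for bias.")
```

In practice, a check like this would run over far more data and more dimensions (stability, drift, adversarial inputs), which is exactly where the need for a lot of data comes in.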

If you could share only one key message with the actuarial community about AI insurance, what would that be? 

AI is a very technical risk, and managing it overlaps with an actuary's education in many ways. Actuaries can bring all their skills together to look at a client's model and say, "I get this risk," and assess the level of risk associated with it. Technical skill sets are critical for assessing an AI system and projecting how it could fail; actuaries are well suited for this.

Ramakrishnan’s vision shows how actuaries can expand their roles from traditional risk assessors to “AI guardians,” forecasting failures and pricing AI risks with precision to facilitate trust in AI technologies and protect businesses from the unknown. AI risk isn’t merely an emerging niche; it’s a chance for actuaries to increase their impact.

The CAWG can be reached at cawg@casact.org. Applications to join the working group as a candidate representative are accepted on an annual basis. This year’s applications will be accepted through September 30, 2025. More information can be found on the CAS website.