
We’re in the midst of an AI gold rush. Business sectors around the world are looking for an AI-fueled evolution that yields greater efficiencies and insights. The insurance industry and the risk management discipline are no exceptions.   

According to The State of AI in Insurance, a report from the market research and analyst firm Forrester, 63% of data and analytics decision-makers at insurance companies report their organization is adopting AI, with another 24% planning to follow suit. It’s not unlike the RMIS wave of the mid-1980s, when insurance and risk professionals jumped on a technology bandwagon that promised to take data and risk information to a new level of performance and functionality.   

But even with all the hype surrounding AI, the rose-colored glasses are already starting to come off. With capabilities for both automating processes and generating consumer and operational insight, AI technology is far-reaching, elusive, and often misunderstood. There's disagreement about how to define it and confusion as to what benefits users should expect from it. The Forbes article What Successful Early Adopters of Generative AI Have Learned notes that 78% of surveyed global executives report lacking the skills or knowledge to successfully utilize AI in their businesses. 

With the right approach, AI tools like natural language processing can move mountains in the insurance industry and make significant contributions to your organization's risk management program. During a recent panel discussion hosted by Origami Risk, 11 industry analysts and experts from the risk and insurance sector weighed in on how organizations are implementing AI, its risks, and what it takes to do it right. Here's what they had to say.  


The key to unlocking AI’s value is to first gain a clear understanding of what AI is (and isn’t), determine the value it can offer your organization, and partner with those who can help you put it into practice. Consider these three key areas: 

1. Tailor AI to your organization to reap maximum benefits. 

Take the time to learn about and experiment with the different AI tools and evaluate how their offerings can benefit your business model. Insurers can harness the power of AI in claims processing and policy underwriting. Risk managers can use AI for predictive analytics of risk trends or to flag risks in real time. In addition, AI can be used to price products, perform market research, analyze risk, and prepare account statements. Redhand Advisors’ article A Beginner’s Guide to Artificial Intelligence in Risk Management notes potential AI use cases in the insurance industry. These include:  

  • Predictive analytics informing the likelihood of fraud, litigation, or expected costs for claims  

  • Automatic response generation to customer inquiries 

  • Automated data gathering and analysis to inform underwriting  
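
To make the first use case concrete, here is a minimal sketch of how a weighted scoring model might flag claims for fraud review. The risk factors, weights, and field names are hypothetical illustrations, not drawn from any real insurer's model; a production system would learn these weights from historical claims data.

```python
# Illustrative sketch: a simple weighted scoring model for flagging
# claims that may warrant fraud review. All factors and weights below
# are hypothetical examples.
import math

# Hypothetical risk factors and their weights (log-odds contributions).
WEIGHTS = {
    "new_policy": 1.2,     # claim filed within 30 days of policy start
    "no_report": 0.8,      # no police report filed
    "prior_claims": 0.5,   # applied once per prior claim in the last year
    "large_amount": 0.9,   # claim amount over a set threshold
}
BIAS = -3.0  # baseline log-odds of fraud for an unremarkable claim

def fraud_likelihood(claim: dict) -> float:
    """Return a 0-1 score via a logistic function over weighted risk flags."""
    z = BIAS + sum(weight * claim.get(factor, 0)
                   for factor, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# A routine claim scores low; a claim with several risk flags scores high.
routine = fraud_likelihood({"new_policy": 0, "no_report": 0})
suspect = fraud_likelihood({"new_policy": 1, "no_report": 1,
                            "prior_claims": 2, "large_amount": 1})
```

In practice the score would only triage claims for human review, never auto-deny them, which also keeps a person accountable for the final decision.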

2. Make clean data the foundation.  

Accurate, complete, structured, good-quality data forms the best foundation for successful AI implementation. The Forbes article Want your company's A.I. project to succeed? Don't hand it to the data scientists, says this CEO notes that an estimated 83% to 92% of AI projects fail, largely because poor-quality data undermines the insights AI is meant to deliver. Data with missing values, inaccuracies, or inconsistencies leads to wrong predictions, which can result in: 

  • Underpriced or overpriced policies 

  • Inefficient fraud detection 

  • Delays and mistakes in claims processing 

  • Non-compliance with legal requirements 

  • Reduced trust in the company’s services  
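
Basic data-quality gates can catch many of these problems before records ever reach a model. The sketch below shows one way to screen claim records for completeness and obvious inaccuracies; the field names and rules are hypothetical examples, and a real pipeline would enforce a far richer set of checks.

```python
# Illustrative sketch: basic data-quality checks on claim records before
# they feed an AI model. Field names and rules are hypothetical.
REQUIRED_FIELDS = ("claim_id", "loss_date", "claim_amount", "policy_id")

def validate_claim(record: dict) -> list:
    """Return a list of data-quality issues found in one claim record."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing value: {field}")
    # Accuracy: claim amounts must be non-negative.
    amount = record.get("claim_amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative claim_amount")
    return issues

clean = {"claim_id": "C-1", "loss_date": "2024-01-05",
         "claim_amount": 1200.0, "policy_id": "P-9"}
dirty = {"claim_id": "C-2", "loss_date": "", "claim_amount": -50.0}

clean_issues = validate_claim(clean)   # no issues
dirty_issues = validate_claim(dirty)   # empty date, missing policy, bad amount
```

Running checks like these at the point of intake, rather than after a model misfires, keeps the cost of bad data visible and attributable.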

Beyond the quality of your data, there's a darker side to consider here: More data incurs more risk. Increased data use creates highly valuable targets for cyberattacks that can spread up and down the vendor chain even if your organization isn't using AI. Because there is little historical precedent to learn from, mitigating these novel risks demands new strategies and processes.  

3. Consider the (lack of) governance around AI and be proactive with your policies.  

The European Union is the global leader in AI regulation. Its AI Act is the first comprehensive AI law that aims to ensure AI systems are safe and transparent. The AI Act provides a risk framework for AI systems to be categorized in levels ranging from limited risk up to unacceptable risk, with corresponding levels of assessment and oversight required for each, according to the European Parliament article EU AI Act: first regulation on artificial intelligence.  
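
Even before regulation applies to you directly, an internal inventory that tags each AI system with an AI Act-style risk tier is a practical starting point for governance. The sketch below is one possible shape for such an inventory; the system names and tier assignments are hypothetical examples, not legal determinations.

```python
# Illustrative sketch: an internal inventory of AI systems tagged with
# EU AI Act-style risk tiers. Tier assignments here are hypothetical
# examples, not legal determinations.
RISK_TIERS = ["minimal", "limited", "high", "unacceptable"]

# Hypothetical inventory: system name -> assessed risk tier.
AI_INVENTORY = {
    "marketing_copy_assistant": "minimal",
    "customer_chatbot": "limited",       # transparency obligations apply
    "underwriting_risk_model": "high",   # heightened assessment and oversight
}

def oversight_required(system: str) -> bool:
    """Systems at the 'high' tier or above need formal assessment."""
    tier_rank = RISK_TIERS.index(AI_INVENTORY[system])
    return tier_rank >= RISK_TIERS.index("high")
```

Keeping this inventory current also answers the first governance question regulators tend to ask: where, exactly, is AI being used?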

By comparison, the United States has no comprehensive federal AI law, despite tremendous pressure to enact one. There is some movement toward a more controlled environment, though. The Colorado Department of Insurance's recently adopted regulation 3 CCR 702-10 on Unfair Discrimination aims to prevent race-based discrimination from occurring with life insurers' use of AI. While Colorado is the first state to introduce formal regulation of AI use in the insurance industry specifically, the Electronic Privacy Information Center's The State of State AI Laws: 2023 notes that many states have addressed AI governance and passed laws around its use, with more expected to formalize AI regulations in the near future. In October 2023, President Biden advanced national regulatory progress with an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that calls for AI safety, security, and privacy standards and advances equity and civil rights, putting the onus on AI creators and government agencies to create parameters.  

With the EU as today's gold standard and the United States taking early steps towards regulation, it's important for your organization to develop governance procedures for the use of AI. To achieve this, you'll need to determine everywhere AI is used, or will be used, in your processes; recognize and fix (or account for) bias in models; and establish controls to manage AI-related risks.  


The Redhand Advisors article Part I: Developing an AI Strategy for Risk Management provides guidance on how to get started. Here are some key steps:   

  1. Build a roadmap of your business’ potential AI use cases and benefits.   

  2. Determine data and technology requirements you’ll need to successfully implement AI in your organization. In addition, note the risks that come with using those technologies and data.  

  3. Stay up to date on governance laws and consider establishing your own rules and regulations within your business.  

AI is a powerful technology with the potential to optimize your organization’s automation, efficiency, insights, and resources. It is not to be taken lightly, though. Successful AI implementation requires a commitment to thoughtful and intentional usage. 

To learn more about the smart and safe implementation of AI technology in your organization, download our eBook on the topic “Get your house in order for… AI” and contact us to request more information. 
