Thought leadership article

How insurers can build the right approach for generative AI

Insurers are embracing generative artificial intelligence (GenAI) to automate tasks, personalize services, and gain insights across various business functions. Initial deployments focus on lower-risk areas such as actuarial, claims, IT, marketing, and finance. However, the widespread adoption of GenAI presents unique risks, including data security, privacy threats, and regulatory concerns. Challenges arise from biases in large language models (LLMs), data breaches, and the opacity of complex models. An effective governance model and risk management strategy will be a leading practice for insurers that want to harness these uniquely transformational technologies as a competitive advantage and meet their innovation goals.

Generative artificial intelligence (GenAI) has the potential to revolutionize the insurance industry. With many insurers already moving quickly to use the technology to automate tasks, personalize products and services, and generate new insights, further adoption has become a competitive imperative.

However, as insurers expand their use, they will need to adopt a governance model and risk management approach to address a unique and varied set of risks, including data security, privacy threats and regulatory concerns about ethics and bias, among others.

How insurers are using GenAI in insurance today

Insurers are piloting the adoption of GenAI in many parts of the business.

Insurers are focusing on lower-risk internal use cases (e.g., process automation, customer analysis, marketing and communications) as near-term priorities, with the goal of expanding these deployments over time. One common objective of first-generation deployments is using GenAI to take advantage of insurers’ vast data holdings.

  • Actuarial and underwriting: streamlining the ingestion and integration of data to free underwriters to focus on high-value work, enabling better risk selection and more profitable pricing.
  • Claims: automating first-notification-of-loss (FNOL) processes and enhancing fraud detection efforts (a minimal FNOL intake sketch follows this list).
  • Information technology: strengthening cybersecurity by analyzing operations data for attempted fraud, monitoring for external attacks and documenting such attacks for regulatory reporting; generating code and documenting infrastructure and software upgrades.
  • Marketing and customer service: capturing customer feedback, analyzing behavioral patterns and conducting sentiment analysis; tailoring interactions with virtual sales and service representatives; strengthening chatbots’ credibility and ability to resolve complex issues.
  • Finance and accounting: preserving organizational knowledge; real-time analysis and summarization of documents; monitoring investment trends and producing more granular insights into financial and operational performance.
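
To make the claims item above concrete, the short Python sketch below shows what LLM-assisted FNOL intake could look like: unstructured notification text goes in, and structured fields come out for downstream claims systems. The prompt format and the llm callable are illustrative assumptions, not a description of any insurer’s or vendor’s actual implementation.

```python
# Hypothetical sketch of LLM-assisted first-notification-of-loss (FNOL) intake.
# The `llm` callable stands in for whatever model endpoint an insurer uses;
# nothing here reflects a specific vendor API.
import json
from typing import Callable

FNOL_PROMPT = """Extract the following fields from the claim notification below
and return them as JSON: policy_number, date_of_loss, loss_type, description.
If a field is missing, use null.

Notification:
{notification}
"""

def extract_fnol_fields(notification: str, llm: Callable[[str], str]) -> dict:
    """Ask the model for structured FNOL fields and parse its JSON reply."""
    raw = llm(FNOL_PROMPT.format(notification=notification))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Malformed or hallucinated output: route to a human rather than failing silently.
        return {"needs_manual_review": True, "raw_output": raw}

if __name__ == "__main__":
    # Stand-in model response, for demonstration only.
    fake_llm = lambda prompt: (
        '{"policy_number": "P-12345", "date_of_loss": "2023-11-02", '
        '"loss_type": "water damage", "description": "Burst pipe in kitchen"}'
    )
    print(extract_fnol_fields("Hi, my kitchen flooded on 2 November...", fake_llm))
```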

Looking ahead, investments in GenAI will mirror large-scale investments in data (both adding more sources and streams and further integrating existing ones) and in the skills and talent necessary to embed AI more deeply within products, processes and the overall technology environment.

This article was authored by Stuart Doyle, EY Risk Principal; Chris Raimondo, EY Americas Insurance Technology Leader; and Ryan Moore, Managing Director, Consulting, Risk Consulting, Financial Services Risk Management, Ernst & Young LLP.

This article has been reproduced with the kind permission of ICMIF Supporting Member EY.

Published December 2023

Increasing GenAI adoption

Greater use of GenAI brings increased risks and the need for enhanced governance

Companies seeking the highest returns on their GenAI investments must prepare to manage the associated risks. The interconnected and dynamic nature of GenAI applications, as well as the complex ecosystems in which they will be deployed, makes this a challenging task. Key risks from GenAI include those that arise from the development and use of large language models (LLMs), as well as existing risks from current AI models, including data complexity, the explainability of largely opaque models, third-party risks, reputational risks, and legal and compliance risks arising from the use of toxic and/or biased information.

Firms and regulators are rightly concerned about the introduction of bias and unfair outcomes. The source of such bias is hard to identify and control, considering the vast data sets used to pre-train complex models with as many as 100 billion parameters. Toxic information, which can produce biased outcomes, is particularly difficult to filter out of such large data sets.

Most LLMs are built on third-party data streams, meaning insurers may be affected by external data breaches. They may also face significant risks when they use their own data, including personally identifiable information (PII), to adapt or fine-tune LLMs. Cyber risk, including adversarial prompt engineering, could lead to the loss of training data or even of a trained LLM itself.
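
As one illustration of a mitigation for the PII exposure described above, the sketch below redacts obvious identifiers before policyholder records are used to adapt or fine-tune a model. The regex patterns are deliberately simple placeholders; a production pipeline would rely on dedicated PII-detection tooling, and these patterns would still miss identifiers such as names.

```python
# Illustrative pre-processing step: redact obvious PII before records are used
# to fine-tune or prompt an LLM. Real deployments would use a dedicated
# PII-detection service; these regexes are deliberately simple examples.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    record = "Claimant John Doe, john.doe@example.com, 555-123-4567, SSN 123-45-6789."
    print(redact_pii(record))
    # -> Claimant John Doe, [EMAIL], [PHONE], SSN [SSN].
    # Note the name is untouched: named-entity detection would be needed as well.
```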

Another concern is the foundational nature of third-party AI models, which are trained on massive data sets and need refining for insurance use cases. Industry regulations and ethical requirements are unlikely to have been factored in during the training of LLMs or image-generating GenAI models. Insurers will also need to consider the risk of hallucinations, which will require training users to identify them and appropriately labeling outputs generated by GenAI. Existing data management capabilities (e.g., modeling, storage, processing) and governance (e.g., lineage and traceability) may not be sufficient to manage all these data-related risks.

Regulatory risks and legal liabilities are also significant, especially given the uncertainty about what will be allowed and what companies will be required to report. Many different jurisdictions and authorities have weighed in, or plan to weigh in, on the use of GenAI, as will industry groups. Transparency and explainability in both model design and outputs are sure to be common themes.

Today, most carriers are still in the early phases of defining their governance models and controls environments for AI/machine learning (ML). The initial focus is on understanding where GenAI (or AI overall) is or could be used, how outputs are generated, and which data and algorithms are used to produce them.

Establishing a strong governance and risk management approach

Effective risk management governance and an aligned approach are critical to realizing the full business value of GenAI

The three lines of defense and cross-functional teams should feature prominently in the AI/ML risk management approach, with clearly defined accountability for specific areas. The business and the risk teams will need to embrace agile work methods in actively assessing risks, operationalizing controls and prioritizing their reviews based on the most common and highest-risk use cases. New talent and expertise in specific areas (e.g., prompt engineering) will be necessary to address all types of GenAI-related risks.

Insurers that invest in the appropriate governance and controls can foster confidence with internal and external stakeholders and promote sustainable use of GenAI to help drive business transformation. Ultimately, the more effective and pervasive the use of GenAI and related technology, the more likely it is that insurers will achieve their growth and innovation objectives.

In moving forward with the development of both their GenAI adoption strategies and risk management frameworks, insurers should consider the following steps:

1. Develop enterprise-wide definitions to identify risks

Effective risk management starts with the ability to identify and define risks. This can be more challenging than it seems, as many current applications (e.g., chatbots) do not cleanly fit existing risk definitions. Similarly, some AI applications are embedded in spreadsheets, technology systems and analytics platforms, while others are owned by third parties. Existing inventory identification and management processes (e.g., for models and IT applications) can be adjusted with specific considerations for certain AI and ML techniques and for key characteristics of algorithms (e.g., dynamic calibration).
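
To picture what such an inventory entry might capture, the sketch below defines a minimal record for registering an AI/ML use case, including the dynamic-calibration characteristic mentioned above. The field names and risk tiers are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of an enterprise AI/ML inventory record. Field names and the
# risk-tier values are illustrative assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    name: str
    owner: str                      # first-line accountable owner
    business_function: str          # e.g., claims, underwriting, marketing
    model_type: str                 # e.g., third-party LLM, in-house ML model
    uses_third_party_model: bool
    processes_pii: bool
    dynamically_recalibrated: bool  # key algorithm characteristic noted above
    risk_tier: RiskTier = RiskTier.MEDIUM
    controls: list[str] = field(default_factory=list)

chatbot = AIUseCase(
    name="Customer service chatbot",
    owner="Head of Customer Service",
    business_function="marketing and customer service",
    model_type="third-party LLM",
    uses_third_party_model=True,
    processes_pii=True,
    dynamically_recalibrated=False,
    risk_tier=RiskTier.HIGH,
    controls=["human review of escalations", "output logging"],
)
```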

2. Embrace cross-functional governance

Cross-functional governance is necessary because no single function or group has full understanding of these interconnected risks or the ability to manage them. Strong operating models will clarify roles and responsibilities for first-line accountability across product, data science, technology and business owners as well as independent risk management functions (e.g., model, compliance, operational). Second-line risk and compliance functions can bring to bear their complementary expertise in working together to understand conceptual soundness across the model lifecycle. Internal audit also has a role to play in ongoing review and testing of controls across the enterprise.

3. Implement an operating model for responsible adoption

Successful GenAI adoption entails having an operating model that directs investments to those applications with the highest ROI and chance of success, while factoring in risk and control considerations. To this end, operating models should be designed to reflect the need for front-line experimentation, exploration and proof-of-concept development, while also ensuring consistent standards for ROI assessment, production and internal controls.

The right operating model increases the chances of successful adoption by helping achieve:

  • Alignment with business strategy;
  • Prudent use of scarce resources;
  • Compliance with relevant policies and regulations.

The key elements of the operating model will vary based on the organization’s size and complexity, as well as the scale of its adoption plans.

Some insurers looking to accelerate and scale GenAI adoption have launched centers of excellence (CoEs) for strategy and application development. Such units can help foster technical expertise, share leading practices, incubate talent, prioritize investments and enhance governance.

4. Enhance existing risk management and control frameworks to address GenAI-specific risks

The rise of GenAI requires enhancements to existing frameworks for model risk management (MRM), data management (including privacy), and compliance and operational risk management (IT risk, information security, third party, cyber).

For example, existing MRM frameworks may not adequately capture GenAI risks, given the models’ inherent opacity, dynamic calibration and use of large data volumes. The MRM framework should be enhanced to include additional guidance on benchmarking, sensitivity analysis, and targeted testing for bias and toxic content.
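
In a very reduced form, targeted testing for bias could start from a paired-prompt check like the one sketched below: the same claim is run through prompts that differ only in a descriptor, and a simple score of the two outputs is compared. The descriptor pair, the scoring function and the llm callable are placeholders, not a validated testing methodology.

```python
# Reduced sketch of a paired-prompt bias check, in the spirit of the "targeted
# testing for bias" guidance above. The descriptor pair, scoring function and
# `llm` callable are illustrative placeholders only.
from typing import Callable

PROMPT_TEMPLATE = "Summarize the claim filed by a {descriptor} policyholder: {claim_text}"
DESCRIPTOR_PAIRS = [("younger", "older")]  # illustrative attribute pair

def paired_prompt_check(claim_text: str,
                        llm: Callable[[str], str],
                        score: Callable[[str], float]) -> list:
    """Compare a simple score of the model's output across paired descriptors."""
    findings = []
    for a, b in DESCRIPTOR_PAIRS:
        out_a = llm(PROMPT_TEMPLATE.format(descriptor=a, claim_text=claim_text))
        out_b = llm(PROMPT_TEMPLATE.format(descriptor=b, claim_text=claim_text))
        findings.append({"pair": (a, b), "score_gap": abs(score(out_a) - score(out_b))})
    return findings

if __name__ == "__main__":
    # Stand-ins for demonstration: a dummy model and a crude "negative outcome" score.
    dummy_llm = lambda prompt: prompt.upper()
    negativity = lambda text: text.lower().count("denied") / max(len(text.split()), 1)
    print(paired_prompt_check("Water damage to kitchen after a pipe burst.", dummy_llm, negativity))
```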

Similar enhancements to data management, compliance and other operational risk frameworks should address data quality, data bias, privacy requirements, entitlement provisions and conduct-related considerations.

5. Develop risk-based controls to promote innovation and speed to market

Given the many use cases for GenAI, a risk-based approach to governance and controls enables responsible innovation and accelerated speed to market. The fundamental idea is that lower-risk applications may not warrant the same level of review as high-risk applications. This risk-based mindset, when adopted across the first line, can promote GenAI adoption at scale; without it, insurers may delay their adoption of high-ROI use cases and, ultimately, discourage innovation. The criteria to determine the level of risk should consider the following (a simple tiering sketch follows this list):

  • Financial, reputational, conduct and/or regulatory compliance impacts;
  • Customer impact;
  • Level of reliance on the applications and complexity of the techniques used;
  • New sources of data (including third party) or unstructured or high-dimensional data;
  • Use in new products and services;
  • Level of technique maturity and development experience;
  • Vendor applications.
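
The tiering sketch referenced above could start from something as simple as the following: each criterion becomes a weighted flag, and the total score maps to a coarse review tier. The flag names, weights and thresholds are assumptions for illustration only; each insurer would calibrate its own scheme.

```python
# Illustrative risk-tiering sketch based on the criteria listed above. The flag
# names, weights and thresholds are assumptions for demonstration only.
CRITERIA_WEIGHTS = {
    "customer_facing": 3,          # direct customer impact
    "regulatory_exposure": 3,      # financial, conduct or compliance impact
    "new_or_third_party_data": 2,  # new, unstructured or third-party data sources
    "new_product_or_service": 2,
    "immature_technique": 2,       # limited experience with the technique
    "high_reliance": 2,            # heavy reliance on the application's output
    "vendor_application": 1,
}

def risk_tier(flags: dict) -> str:
    """Map true/false criteria to a coarse review tier via a weighted score."""
    score = sum(weight for name, weight in CRITERIA_WEIGHTS.items() if flags.get(name))
    if score >= 8:
        return "high: full model-risk review before production"
    if score >= 4:
        return "medium: targeted review of data and controls"
    return "low: standard change controls"

if __name__ == "__main__":
    chatbot_flags = {"customer_facing": True, "regulatory_exposure": True,
                     "new_or_third_party_data": True, "vendor_application": True}
    print(risk_tier(chatbot_flags))  # -> high: full model-risk review before production
```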

6. Invest in capabilities that support GenAI adoption and risk management

As part of their adoption strategies, insurers should develop two- to three-year projections on the products, services and processes that will use GenAI and define the necessary capabilities and investments to meet the business needs. The key areas include:

Data management: Centralized teams of data scientists allow application developers to access robust data sets to drive agile application development. Investments in next-generation data strategy and architecture (e.g., data lakes, cloud) will also be a priority.

Modeling infrastructure: Some insurers are building internally or working with third parties to develop next-generation architecture for current and new modeling platforms. The goal is to make data, computing and approved applications accessible and scalable across the enterprise, with a streamlined approach to development, validation and support.

Talent: It may seem counterintuitive, but putting humans at the center of GenAI strategies will help insurers drive value from their GenAI investments. Diverse teams of product managers, data scientists, data engineers, application developers and business analysts must work together with risk managers and internal auditors to realize business value and to ensure that the necessary oversight is in place.

The bottom line

Ensuring the responsible use of AI through effective risk management

Despite the many and potentially severe risks they present, the power of GenAI and related technologies is simply too great for insurers to ignore. To take advantage of the possibilities, senior leaders must develop bold and creative adoption strategies and plans to drive breakthrough innovation.

A strong, risk-based approach to adoption, combined with cross-functional governance and the right talent in the right roles, is critical to driving the outcomes and ROI that insurers are looking for.
