Advances in AI, especially Gen AI and Agentic AI, are creating significant opportunities but also sharpening a critical question for insurers: just because a process can be automated, should it be?
Executive Summary
Manuel Rodriguez Vera of Capgemini’s WNS unit offers practical approaches for insurers to embed control, privacy and compliance into AI systems while enabling collaboration and innovation across jurisdictions. The approaches discussed include restricting data and AI workloads to specific geographies and an emerging approach to machine learning known as federated learning, among others.

High-impact AI use cases offer the potential to reduce costs and risk; improve productivity and efficiency; and drive revenue across sales, underwriting, claims and servicing. Yet as insurers accelerate AI adoption, they confront growing risks that extend beyond effectiveness to critical areas such as cybersecurity, privacy and explainability. In 2025 alone, data breaches affected nearly 300 million individuals, underscoring how quickly trust can erode. At the same time, customer expectations and regulatory scrutiny around data usage and model transparency are intensifying, with increasingly direct questions about how data is used and how models operate.
Taken together, these pressures reinforce a dual imperative for insurers: apply AI thoughtfully and strengthen the data governance foundations that support it, maintaining compliance and upholding customer trust.
Data Sovereignty and Privacy: The New Imperatives for an AI-First World
Data sits at the heart of the insurance industry, underpinning how insurers evaluate risk, make decisions and tailor experiences to individual customers. As insurers look to embed AI more deeply across functions, data sovereignty and privacy are emerging as critical priorities. Failure to address them can expose insurers to significant risks across four key areas: regulatory compliance, operational resilience, customer trust and competitive advantage. These risks also help explain why insurers remain cautious about fully automating decisions.
A closer look at these risks reveals their true impact.
Data privacy and residency requirements are becoming more complex and increasingly fragmented across regions, making compliance more challenging.
In the EU, the GDPR enforces strict controls on personal data processing and automated decision-making, requiring both explainability and a lawful basis for AI-driven outcomes. In the U.S., regulations like the CCPA give consumers rights to access, delete and limit the use of their data, adding further constraints. Meanwhile, countries such as China, India, Singapore and several in the Middle East are strengthening data localization laws, requiring sensitive data—such as health and biometric information—to remain within national borders.
In parallel, frameworks such as the EU Artificial Intelligence Act classify many insurance AI applications as high-risk, requiring clear proof of controlled datasets, rigorous bias testing, full traceability and meaningful human oversight.
AI systems can also introduce significant operational risks when processing confidential data to assess risk, price policies and make decisions. One such risk is training data leakage, where proprietary or confidential information becomes embedded in shared models and may be exposed through downstream use.
Meanwhile, model inversion attacks can reconstruct sensitive inputs, including personal or health data, by analyzing outputs. Cross-client data contamination can also occur, mixing insights across tenants. For insurers, this raises both compliance risks and the risk of exposing catastrophe models, pricing strategies and underwriting approaches—core assets that underpin competitive advantage.
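To make the leakage risk concrete, the Python sketch below illustrates one simple memorization check on synthetic data: if a model is far more confident on records it was trained on than on unseen records, that gap is a warning sign that training data could be recovered through its outputs. The model, data and threshold here are illustrative assumptions, not a production test.

```python
# Minimal sketch (synthetic data, illustrative threshold): compare model confidence on
# training records vs. unseen records. A large gap suggests memorization of training data,
# which raises the leakage and inversion risks described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def mean_confidence(model, X, y):
    # Average predicted probability assigned to the true class.
    proba = model.predict_proba(X)
    return float(np.mean(proba[np.arange(len(y)), y]))

gap = mean_confidence(model, X_train, y_train) - mean_confidence(model, X_test, y_test)
print(f"Train vs. held-out confidence gap: {gap:.3f}")

# Illustrative threshold only; in practice the gap would be benchmarked per model class.
if gap > 0.05:
    print("Potential memorization risk - review before sharing or exposing this model.")
```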
At the same time, large corporate clients are increasingly demanding greater transparency from insurers and brokers, asking critical questions about where their data is stored, how it is used and who can access it.
Ultimately, insurance is a trust-based industry. How data is handled directly influences customer loyalty and market position. Without confidence in how AI systems use and protect data, even the most advanced capabilities struggle to deliver value.
Embracing AI-driven, Human-led Data Ecosystems
In our experience working with leading insurers, adopting domain-relevant, future-ready data architectures that balance technological innovation with human judgment is essential to sustained success. The following practical approaches show how insurers can embed control, privacy and compliance into AI systems while enabling collaboration and innovation across jurisdictions.
Sovereign and regional cloud deployments are becoming foundational to global operations. By restricting data and AI workloads to specific geographies, insurers can meet localization and cross-border requirements while maintaining control. Increasingly, enterprises are adopting private and hybrid cloud architectures that span core, edge and far-edge environments, enabling flexibility without sacrificing compliance.
For example, many UK and EU insurers operate EU-only AI environments, alongside separate infrastructures for the U.S. and other regions to meet jurisdictional requirements. This approach ensures compliance, operational autonomy and security across highly distributed, mission-critical environments, with sovereignty extending seamlessly from centralized data centers to remote edge locations.
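As a simplified illustration of how such residency controls might be enforced in application code, the Python sketch below routes each workload to a region-pinned environment based on a residency tag and fails closed when no compliant region exists. The region names and endpoints are hypothetical placeholders, not any particular provider's configuration.

```python
# Illustrative sketch only: route AI workloads to region-pinned deployments based on a
# data-residency tag. Region keys and endpoint URLs are hypothetical placeholders.
from dataclasses import dataclass

REGIONAL_ENDPOINTS = {
    "eu": "https://ai.eu.internal.example/v1",      # EU-only environment
    "us": "https://ai.us.internal.example/v1",      # U.S. environment
    "apac": "https://ai.apac.internal.example/v1",  # data-localized APAC environment
}

@dataclass
class Workload:
    name: str
    data_residency: str  # jurisdiction tag attached at ingestion, e.g. "eu"

def resolve_endpoint(workload: Workload) -> str:
    # Fail closed: if the residency tag is unknown or unmapped, refuse rather than
    # default to a cross-border environment.
    endpoint = REGIONAL_ENDPOINTS.get(workload.data_residency)
    if endpoint is None:
        raise ValueError(f"No compliant region for residency tag '{workload.data_residency}'")
    return endpoint

print(resolve_endpoint(Workload(name="claims-triage", data_residency="eu")))
```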
Private large language model (LLM) rollouts and smaller, domain-specific model implementations are gaining traction, driven by multiple factors: the availability of vast volumes of AI-suitable data; the need to comply with strict regulatory and confidentiality requirements; and rising client expectations for faster, deeper insights. These models are deployed within an organization’s own infrastructure, ensuring sensitive data remains within controlled jurisdictions and is not exposed externally or used for broader model training.
As private LLM infrastructure becomes more accessible and cost-effective, adoption is expected to accelerate further. Leading insurers are advancing responsible AI frameworks to govern these deployments while maintaining human oversight across use cases such as financial modeling support, workflow automation and reporting.
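The sketch below shows, in simplified form, what calling such a privately hosted model can look like from application code: requests go to an internal, OpenAI-compatible endpoint inside the insurer's own network rather than to an external provider. The host name, model name and payload shape are assumptions for illustration only.

```python
# A hedged sketch of calling a privately hosted LLM over an OpenAI-compatible chat
# endpoint, as commonly exposed by self-hosted serving stacks. The URL, model name and
# prompt are hypothetical; sensitive data stays inside the insurer's own infrastructure.
import requests

PRIVATE_LLM_URL = "https://llm.internal.example/v1/chat/completions"  # hypothetical host

def summarize_claim_note(note: str) -> str:
    payload = {
        "model": "insurer-private-model",  # placeholder for an internally hosted model
        "messages": [
            {"role": "system", "content": "Summarize the claim note for an adjuster."},
            {"role": "user", "content": note},
        ],
        "temperature": 0.2,
    }
    # Nothing is sent to an external provider or used for broader model training.
    resp = requests.post(PRIVATE_LLM_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```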
Federated learning, an emerging approach to AI, enables decentralized model training while preserving data privacy. Unlike traditional machine learning, which relies on centralizing data, it allows models to be trained directly where the data resides—on local servers or devices.
For insurers, this means models can be developed collaboratively without sharing raw customer data, enabling access to broader datasets and insights across organizations. By unlocking data collaboration without compromising privacy, federated learning has the potential to significantly improve model accuracy, enhance customer experiences and drive stronger business outcomes.
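The minimal sketch below illustrates the core mechanic with a toy linear model in Python: each participant computes an update on data that never leaves its environment, and only the model parameters are averaged centrally. The data, task and number of rounds are purely illustrative.

```python
# A minimal federated-averaging sketch using numpy: each "client" (e.g. a regional
# entity or partner) trains on its own data; only parameter vectors, never raw records,
# are exchanged and averaged by the coordinator.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    # One gradient step of linear regression on data that never leaves the client.
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three clients, each holding its own (synthetic) dataset locally.
clients = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
weights = np.zeros(5)

for _ in range(50):
    # Clients train locally; the coordinator sees only the updated parameters.
    updates = [local_update(weights, X, y) for X, y in clients]
    weights = np.mean(updates, axis=0)  # federated averaging

print("Global model weights:", np.round(weights, 3))
```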
Data clean rooms—secure environments that enable collaboration between brokers, insurers and reinsurers without exposing sensitive data—are also gaining momentum. The global data clean room market for insurance reached $1.24 billion in 2024 and is projected to grow at a 23.8% CAGR through 2033. By design, data clean rooms enable organizations to analyze and combine datasets without directly sharing the underlying data. Each collaboration is governed by a data provider who defines what data can be accessed and what analyses are permitted, while all parties retain full ownership and control of their data.
Privacy is further reinforced through privacy-enhancing technologies (PETs), including differential privacy, aggregation techniques and synthetic data generation. Synthetic data is used to train and test models without relying on real customer information. This controlled approach enables use cases such as fraud detection and credit modeling, allowing organizations to unearth cross-party insights while maintaining privacy, compliance and human oversight.
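The sketch below combines these two ideas in simplified form: a clean-room style function joins two parties' data inside a governed environment and releases only aggregates, with small groups suppressed and Laplace noise (a basic differential-privacy mechanism) added to the released counts. Column names, thresholds and the privacy budget are assumptions; a real deployment would rely on a vetted clean-room platform and privacy library rather than hand-rolled code.

```python
# Illustrative clean-room plus PET sketch: row-level data is joined only inside the
# governed function, and only suppressed, noised aggregates are released to the parties.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
MIN_GROUP_SIZE = 25   # illustrative small-cell suppression threshold
EPSILON = 1.0         # illustrative privacy budget for the noisy counts

def clean_room_summary(broker_df: pd.DataFrame, insurer_df: pd.DataFrame) -> pd.DataFrame:
    # The row-level join never leaves the controlled environment.
    joined = broker_df.merge(insurer_df, on="policy_id")
    agg = joined.groupby("industry_segment").agg(
        policies=("policy_id", "count"),
        incurred_loss=("incurred_loss", "sum"),
        premium=("premium", "sum"),
    )
    agg["loss_ratio"] = agg["incurred_loss"] / agg["premium"]
    # Suppress groups too small to release safely.
    agg = agg[agg["policies"] >= MIN_GROUP_SIZE]
    # Add Laplace noise to counts (sensitivity 1 for add/remove-one-record counting).
    agg["policies_noisy"] = agg["policies"] + rng.laplace(0.0, 1.0 / EPSILON, size=len(agg))
    # Only the aggregate table is returned to the collaborating parties.
    return agg[["policies_noisy", "loss_ratio"]]
```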
Engineering AI for Enterprise Success
As AI gains momentum and regulatory and customer expectations intensify, insurers face a critical challenge: how to operationalize governance and build trust at scale. A pragmatic approach is to pair AI-centric architectures and explainable systems for routine tasks, such as claims validation, with human judgment for high-stakes decisions, including suspected fraud and underwriting exceptions.
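A simplified sketch of that routing logic appears below: routine cases with high model confidence follow the automated path, while high-stakes categories or low-confidence cases are queued for human review. The category names and threshold are illustrative assumptions.

```python
# Hedged sketch of automated-vs-human routing: high-stakes categories or low model
# confidence always escalate to a human; routine, high-confidence cases stay automated.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    category: str          # e.g. "claims_validation", "fraud_referral", "uw_exception"
    model_confidence: float

HIGH_STAKES = {"fraud_referral", "uw_exception"}
CONFIDENCE_FLOOR = 0.90    # illustrative threshold

def route(decision: Decision) -> str:
    if decision.category in HIGH_STAKES or decision.model_confidence < CONFIDENCE_FLOOR:
        return "human_review"    # human judgment retained for high-stakes outcomes
    return "automated_path"      # explainable system handles the routine case

print(route(Decision("CLM-001", "claims_validation", 0.97)))  # -> automated_path
print(route(Decision("CLM-002", "fraud_referral", 0.99)))     # -> human_review
```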
Building this foundation is complex, and few organizations can do it alone. Collaborating with strategic partners who combine deep domain knowledge with AI expertise can accelerate adoption while reinforcing governance, compliance and scalability.
Insurers that get this right will stand apart, balancing rapid innovation with targeted control, accelerating operations without compromising accountability, and strengthening the trust that underpins every insurance relationship.


