
The insurance industry is entering a phase of artificial intelligence (AI) adoption where the primary constraint is no longer capability.

Executive Summary

Insurance industry participants need to be building for change, Finsys CEO Kurt Diederich believes, drawing parallels between the proliferation of InsurTechs delivering AI tools and the growth of Internet companies in the early 2000s, many of which did not survive.

“Carriers that approach AI as a static investment risk accumulating technical and operational constraints,” he writes, supporting his view that a better approach is to “treat AI as a modular capability” that is part of a “plug-and-play operating model,” allowing carriers to evaluate, implement, and replace components with minimal disruption as conditions evolve.

Here, he also provides tips on designing adaptable systems and testing approaches for AI components.

Most core use cases, including submission intake, underwriting support, claims triage, document processing, and customer service augmentation, have already been developed. Many carriers are not deciding whether AI can be applied but determining which solutions merit implementation and how those decisions should be managed over time.

This distinction matters. The challenge has shifted from innovation to selection and, ultimately, to lifecycle management. Given the pace of change in the InsurTech ecosystem, any given solution may be superseded or displaced within a relatively short time. This dynamic is not without precedent.

In the early 2000s, the rapid proliferation of Internet companies created a similarly crowded and volatile landscape, where only a small percentage of vendors ultimately proved durable. The current AI market is exhibiting comparable characteristics, with a high volume of entrants, uneven differentiation, and ongoing capability leapfrogging.

Carriers that approach AI as a static investment risk accumulating technical and operational constraints that limit their ability to adapt as the market consolidates and evolves.

The winning approach is to treat AI as a modular capability within a broader technology ecosystem. This requires a plug-and-play operating model that allows carriers to evaluate, implement, and replace components with minimal disruption as conditions evolve.

Anchoring AI Decisions in Operational Constraints

A recurring issue in AI adoption is the tendency to begin with vendor capabilities rather than internal operational requirements. This leads to the introduction of tools that are technically sound but only marginally impactful.

It’s better to anchor evaluation around clearly defined operational constraints. Carriers generally have well-understood friction points, whether related to underwriting throughput, claims handling efficiency, or variability in submission quality. The role of AI should be assessed in the context of those constraints rather than as a standalone capability.

“In many organizations, systems are tightly connected, with deeply embedded, mutually dependent integrations…. Replacing one tool can create ripple effects across the entire environment.”

For example, in high-volume underwriting environments, the issue is often less about having enough data and more about dealing with inconsistency in that data. When submissions arrive in a standard format, tools like optical character recognition (OCR) can handle them efficiently. The challenge arises when they don’t: when underwriters are reviewing inconsistently structured submissions, partially completed applications, or documents pulled from different systems. In those situations, AI helps make sense of the variation, organize the information, and surface the most relevant risks so underwriters can focus their time on higher-value decisions.
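To make the idea concrete, the sketch below shows the kind of normalization step such a tool performs: mapping inconsistently labeled submission fields onto one internal schema. The alias table, field names, and sample data are hypothetical, invented for illustration.

```python
# Hypothetical sketch: map whatever field labels a submission arrives with
# onto a single internal schema, keeping unrecognized fields for review.
FIELD_ALIASES = {
    "insured_name": {"insured", "insured_name", "applicant", "named insured"},
    "annual_revenue": {"revenue", "annual_revenue", "gross receipts"},
    "class_code": {"class_code", "classification", "class code"},
}

def normalize_submission(raw: dict) -> dict:
    """Normalize one submission's fields; unmapped fields go to an
    '_unmapped' bucket so an underwriter can still see them."""
    normalized, unmapped = {}, {}
    for key, value in raw.items():
        canonical = next(
            (field for field, aliases in FIELD_ALIASES.items()
             if key.strip().lower() in aliases),
            None,
        )
        (normalized if canonical else unmapped)[canonical or key] = value
    normalized["_unmapped"] = unmapped
    return normalized

# A broker email and an agency portal may label the same data differently,
# but both land in the same internal shape.
broker_email = {"Named Insured": "Acme Co", "Gross Receipts": 2_500_000, "Notes": "renewal"}
record = normalize_submission(broker_email)
```

In a real deployment the alias matching would be done by a model rather than a static table, but the contract is the same: varied inputs in, one consistent schema out.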

This type of targeted application often produces more consistent results than broad, undifferentiated deployment. It also provides a clearer basis for carriers to evaluate the ROI.

Designing Systems That Can Adapt

There is a high level of volatility in today’s AI vendor landscape. New solutions are regularly entering the market, and existing ones are being improved or replaced at a rapid pace. Carriers can assume that some of the tools they implement today will need to be replaced in the near future. The ability to make those changes efficiently is becoming a critical capability.

This has important implications for how systems are designed. In many organizations, systems are tightly connected, with deeply embedded, mutually dependent integrations. While this may work in the short term, it makes change slow and expensive. Replacing one tool can create ripple effects across the entire environment.

A more flexible approach is to design systems so that components can be updated or replaced with minimal disruption. This typically involves using well-defined APIs and keeping data flows clean and easy to manage. It also requires ongoing attention. Over time, even modern systems can become difficult to change if integrations are added without considering long-term flexibility.
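One way to picture this decoupling is an internal interface that downstream systems code against, with each vendor behind its own adapter. The sketch below assumes invented vendor names and a simplified contract; it is an illustration of the pattern, not any carrier's actual architecture.

```python
from abc import ABC, abstractmethod

class SubmissionExtractor(ABC):
    """Stable internal contract that downstream systems depend on.
    Swapping vendors means writing one new adapter, not rewiring
    every integration."""

    @abstractmethod
    def extract(self, document: str) -> dict:
        ...

class VendorAExtractor(SubmissionExtractor):
    def extract(self, document: str) -> dict:
        # In practice this would call Vendor A's API; stubbed for the sketch.
        return {"insured_name": document.strip().title(), "source": "vendor_a"}

class VendorBExtractor(SubmissionExtractor):
    def extract(self, document: str) -> dict:
        return {"insured_name": document.strip().upper(), "source": "vendor_b"}

def triage(document: str, extractor: SubmissionExtractor) -> dict:
    """Downstream workflow depends only on the interface, never a vendor."""
    record = extractor.extract(document)
    record["routed_to"] = "underwriting"
    return record

# Replacing a vendor is a one-line change at the point of wiring:
result = triage("  acme logistics  ", VendorAExtractor())
```

The point of the design is that `triage` never changes when a vendor is replaced; only the adapter chosen at the wiring point does.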

It’s not just an issue of whether a system is old or new. What matters more is whether it has been designed and maintained with adaptability in mind.


Choosing the Right Approach to Testing

Testing is a critical part of AI adoption, but the level of rigor should reflect the level of risk. A/B testing, which is comprehensive but also expensive and time-consuming, is necessary in situations where decisions are difficult to reverse and errors carry meaningful consequences.

Claims triage is a good example. When AI is used to classify and route claims, identify potential fraud, or determine escalation paths, it can directly impact both loss outcomes and operational efficiency. In cases like this, a controlled approach is necessary: variables should be limited, and success metrics should be clearly defined in advance.
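A minimal sketch of such a controlled comparison, with a single varied factor and a success metric fixed in advance. The routing rules, fields, and data here are synthetic placeholders, not a real triage model.

```python
import random
from statistics import mean

def rules_based_route(claim):  # control arm: existing rules
    return "escalate" if claim["loss_estimate"] > 50_000 else "fast_track"

def model_based_route(claim):  # treatment arm: stand-in for an AI model
    if claim["loss_estimate"] > 50_000 or claim["fraud_score"] > 0.8:
        return "escalate"
    return "fast_track"

def run_ab_test(claims, seed=7):
    """Randomly assign each claim to one arm; the only variable is
    which router runs, and the predefined metric is routing accuracy."""
    random.seed(seed)
    results = {"control": [], "treatment": []}
    for claim in claims:
        arm = random.choice(["control", "treatment"])
        router = rules_based_route if arm == "control" else model_based_route
        results[arm].append(router(claim) == claim["correct_route"])
    return {arm: mean(flags) for arm, flags in results.items() if flags}

# Synthetic claims with known correct routes, repeated for sample size.
claims = [
    {"loss_estimate": 80_000, "fraud_score": 0.2, "correct_route": "escalate"},
    {"loss_estimate": 12_000, "fraud_score": 0.9, "correct_route": "escalate"},
    {"loss_estimate": 9_000,  "fraud_score": 0.1, "correct_route": "fast_track"},
] * 20
arm_accuracy = run_ab_test(claims)
```

In practice the "correct route" comes from adjuster outcomes rather than labels known up front, which is precisely why this kind of test is slow and expensive, and why it should be reserved for high-stakes decisions.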

But this level of rigor is not required for every use case. When the cost of failure is low and changes can be easily reversed, a more flexible, iterative approach is often more effective. This allows carriers to test solutions in a live environment, learn quickly, and adjust without the time and expense of formal A/B testing. This model is particularly well suited to use cases that include human oversight.

Examples include document summarization, call center assistance, and AI-driven data extraction from unstructured submissions. In these scenarios, outputs can be reviewed and validated before moving forward, reducing risk while still improving efficiency.

Evaluation in these cases should focus on operational impact rather than absolute accuracy. Metrics such as time savings, reduced manual effort, and minimal downstream disruption provide a more practical measure of value. This approach enables faster learning, supports broader experimentation, and helps carriers avoid committing to solutions before they have demonstrated real, sustained benefit.
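One way to compute such operational metrics, assuming a hypothetical review-log format where each AI-assisted document review records time spent and whether the human reviewer had to edit the output:

```python
def impact_metrics(review_log, baseline_minutes_per_doc):
    """Operational-impact view of a human-in-the-loop AI tool.
    review_log: list of {"minutes_spent": float, "edited": bool}, one
    entry per reviewed document. Measures time saved against the manual
    baseline, not model accuracy."""
    n = len(review_log)
    avg_minutes = sum(r["minutes_spent"] for r in review_log) / n
    return {
        "docs_reviewed": n,
        "avg_minutes_saved": round(baseline_minutes_per_doc - avg_minutes, 2),
        "pct_accepted_unedited": round(
            100 * sum(not r["edited"] for r in review_log) / n, 1
        ),
    }

# Illustrative log: three reviews against a 12-minute manual baseline.
log = [
    {"minutes_spent": 4.0, "edited": False},
    {"minutes_spent": 6.0, "edited": True},
    {"minutes_spent": 5.0, "edited": False},
]
metrics = impact_metrics(log, baseline_minutes_per_doc=12.0)
```

Metrics like these make the go/no-go decision concrete: a tool that saves minutes per document and is usually accepted unedited is demonstrating sustained value even if its raw accuracy is imperfect.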

Planning for Scale from the Start

Many carriers today are finding that their AI initiatives get stuck in the pilot phase and never fully scale across the organization. It’s become a common challenge across the industry: solutions show promise in controlled environments but fail to translate into broader operational impact.

One of the main reasons is that scaling isn’t considered early enough. Pilots are often designed to prove that something can work, but not what it will take to make it work consistently at an enterprise level. To avoid this, carriers should plan for scale from the beginning. That means thinking beyond the pilot and considering what it will take to support the solution in a production environment.

There are a few key elements to get right. Integrations should be built to work reliably at scale, not just in testing. Governance should be clearly defined, with ownership and accountability in place. And organizations should be prepared for change, since AI often alters workflows and how decisions are made.

Stuck in pilot phase? “One of the main reasons is that scaling isn’t considered early enough…. That means thinking beyond the pilot and considering what it will take to support the solution in a production environment.”

Having a dedicated sponsor is especially important. Someone needs to be responsible for tracking results, driving adoption, and ensuring the solution continues to improve over time. Without this level of planning and ownership, even strong pilots can remain limited in scope and fail to deliver meaningful impact across the organization.

Evaluating Technology Partners in a Multi-Integration Environment

The increasing complexity of carrier ecosystems has elevated the importance of technology partners. Modern implementations frequently involve numerous integrations spanning data sources, third-party services, and internal systems.

As a result, a partner’s ability to support integration has become a primary consideration. This includes not only initial implementation but also the ongoing evolution of those integrations as systems and requirements change.

“Someone needs to be responsible for tracking results, driving adoption, and ensuring the solution continues to improve over time.”

Traditional evaluation mechanisms, such as feature-based scorecards, often do not fully capture these capabilities. More informative indicators include customer satisfaction, reference feedback, and the degree to which the partner’s incentives align with carrier objectives.

Domain expertise is also a critical factor. Insurance operations involve a level of complexity that requires experience to navigate effectively. Partners with established industry knowledge are better positioned to anticipate challenges and support long-term success.

The trajectory of AI adoption in insurance suggests that competitive advantage will be determined less by access to technology and more by the ability to manage it over time.

Carriers that adopt a modular, plug-and-play approach, supported by appropriate architecture, governance, and testing discipline, will be better positioned to respond to ongoing market changes. Those that do not may find themselves constrained by earlier decisions that are difficult to reverse. In this environment, the defining capability is not implementation, but adaptability.