
The line between real and fake is becoming increasingly difficult to discern.

Repair estimates, medical records, invoices and damage photos can be created with inexpensive, at-home AI setups and open-source models, allowing bad actors to develop compelling images and documents without any identifiable watermark or digital fingerprint.

Recent cases of AI-generated voice impersonation used to authorize fraudulent wire transfers, along with the rise of synthetic identity and falsified medical documentation, underscore a broader trend: fraud is no longer constrained by effort or scale. What once required organized networks can now be executed by individuals with standard tools.

For insurers, this marks a dangerous turning point in a long-running competition with fraudsters, one in which the advantage is shifting faster than many organizations realize.

Supplier Fraud Enables Insider Fraud — and Vice Versa

Imagine that you are a farm insurer in rural Iowa.

Unlike standard car repairs, farm equipment repairs vary widely by the type of equipment and the specific repair needed. Because so much of this business is bespoke, claims handlers need deeper knowledge of what each repair shop specializes in, putting them in more frequent contact with specific repairers.

Using this context, imagine a client needs to repair a combine harvester, which is a $500,000 piece of equipment.

It’s easier for a bad actor at a repair shop to pad an invoice by $5,000 against a $500,000 piece of equipment than it is to pad an invoice by $500 against a $50,000 car.

This leaves plenty of money to kick back to a claims handler, who might have personal connections with the owner of the repair shop.

Supplier fraud enables insider fraud, and vice versa.

Fraud Extends Beyond Claims

Much of the industry’s attention naturally focuses on claims fraud. Claims represent a direct financial exposure, high transaction volume and increasing automation. Other significant types of fraud may receive less attention but are just as prevalent and damaging, particularly supplier fraud and insider-enabled schemes. Preferential referrals, kickback arrangements and long-running supplier relationships designed to extract value over time don’t always surface in traditional fraud detection models. These schemes often operate below the threshold of standard claims analytics, making them harder to detect and more damaging over the long term.

Data + Generative AI: Powerful Fraud Weapons

At the center of effective fraud defense is data. Over the last two decades, machine learning has proven to be a powerful tool in identifying suspicious patterns, especially in personal lines, where high volumes of clean, standardized data enable models to learn from millions of historical claims.

Industry tools that pool data across insurers, such as claims and underwriting exchanges or network-based detection platforms, allow carriers to uncover behaviors that would remain invisible within a single organization. These systems use machine learning to identify subtle patterns such as connections across claims, repeated events or unusual relationships between parties that signal fraudulent activity. Much like a chess engine that learns optimal strategies by playing millions of games, fraud detection models learn from enormous datasets to spot behaviors that even seasoned investigators might miss.
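A minimal sketch of the kind of cross-claim link analysis described above, assuming a simplified, illustrative claims dataset (the field names and values are hypothetical, not taken from any real detection platform):

```python
from collections import defaultdict

# Illustrative claims records; field names and values are assumptions.
claims = [
    {"claim_id": "C1", "phone": "555-0101", "repair_shop": "Shop A"},
    {"claim_id": "C2", "phone": "555-0101", "repair_shop": "Shop B"},
    {"claim_id": "C3", "phone": "555-0202", "repair_shop": "Shop A"},
    {"claim_id": "C4", "phone": "555-0303", "repair_shop": "Shop C"},
]

def shared_attribute_links(records, key):
    """Group claim IDs that share the same value for `key`."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec["claim_id"])
    # Only groups containing more than one claim indicate a link worth review.
    return {value: ids for value, ids in groups.items() if len(ids) > 1}

print(shared_attribute_links(claims, "phone"))
# C1 and C2 share a phone number, a link invisible within either claim alone
```

Real network-based platforms operate over pooled, anonymized data across carriers and far richer attributes, but the core idea is the same: connections between otherwise unrelated records surface only when the records can be compared.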

But data-driven detection has limits. These systems struggle when data quality is poor, volumes are low or pooling data is impractical. Specialty risks, bespoke commercial policies and internal fraud scenarios often fall outside the reach of traditional machine learning models. Fraudsters understand these limitations and increasingly exploit them.

That’s where generative AI begins to shift the equation, not by replacing existing fraud models but by expanding the amount and types of data those models can learn from, especially in areas where structured data has historically been scarce. This technology excels at extracting structured data from unstructured sources, such as handwritten documents, scanned forms and lengthy medical reports.

This enables insurers to detect suspicious inconsistencies buried deep within massive document sets. A red flag hidden on page 377 of a 500-page medical report is no longer invisible. Generative AI can surface anomalies, summarize findings and guide investigators toward the most relevant details. This allows human experts to focus where judgment matters most.

Staying Ahead of Fraud

Even with larger and more sophisticated datasets, fraud remains a moving target. Insurers will always be playing catch-up. Fraudsters are relentless, adaptive and highly motivated. But carriers can tilt the odds in their favor by adopting a layered defense strategy grounded in data, technology and human vigilance:

Turn operations into sensors. From claims workflows and customer interactions to internal systems and employee behavior, insurers must move beyond narrow fraud models and adopt comprehensive approaches to collecting, instrumenting and analyzing telemetry across their entire organizations. Collecting anonymized data from contact centers, underwriting teams and claims operations is especially valuable, as it allows patterns to surface that would otherwise remain invisible.

Retesting is a core control. Detection capabilities evolve, but only if the underlying data is preserved. A document that clears fraud controls today may become an obvious red flag tomorrow. Without the ability to retrospectively analyze historical data, insurers lose the opportunity to identify emerging patterns and strengthen future defenses. Continuous testing, retesting and forensic analysis are essential in an arms race that never stands still.
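Retesting can be sketched as re-running a newer rule set over archived documents that passed earlier screening. The rules, thresholds and document fields below are illustrative assumptions, not a real carrier's controls:

```python
import re

# Archived documents that cleared the original screening; fields are assumed.
archived_docs = [
    {"doc_id": "D1", "text": "Invoice total: $4,980", "flagged": False},
    {"doc_id": "D2", "text": "Repair estimate, combine harvester", "flagged": False},
]

def rule_v1(doc):
    # Original rule: a simple keyword check.
    return "urgent wire transfer" in doc["text"].lower()

def rule_v2(doc):
    # Newer rule: also catches dollar amounts just under a $5,000 threshold.
    amounts = [float(m.replace(",", ""))
               for m in re.findall(r"\$([\d,]+)", doc["text"])]
    return rule_v1(doc) or any(4500 <= a < 5000 for a in amounts)

def retest(docs, rule):
    """Flag documents a newer rule catches that original screening missed."""
    return [d["doc_id"] for d in docs if rule(d) and not d["flagged"]]

print(retest(archived_docs, rule_v2))  # D1 now surfaces as a red flag
```

The control only works if the underlying documents are preserved: without the archive, `retest` has nothing to run against.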

Align defenses to the real risks. Not all fraud looks the same. The tactics of a bad actor seeking a quick payout on a fictitious auto claim differ sharply from those of a bad actor behind a long-running supplier or insider fraud scheme. Insurers need a clear understanding of where they are most exposed to deploy the proper controls, analytics and oversight.

For example, someone looking for a quick payday can stage a slam-on at a busy intersection with a car full of passengers, all of whom then make whiplash claims in the hope that the insurance company will pay rather than fight. This data is both high-volume and standardized, which gives the industry an opportunity to detect the organized fraud networks that stage incidents of this type.

But the less data there is, the harder it is to detect network effects, which makes low-volume, high-value areas (e.g., prestige vehicles, light commercial, farm) potential sources of insider fraud. For schemes with little surrounding data, detection strategies can include behavioral signals, such as unexplained sources of wealth; operational signals, such as employees refusing to take vacations or accessing systems at odd hours; and financial signals, such as repeated invoices from the same supplier in amounts just below the threshold that requires senior oversight.
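The financial signal above, repeated invoices just under an approval threshold, lends itself to a simple check. A minimal sketch, assuming a hypothetical $5,000 senior-approval threshold and illustrative supplier data:

```python
from collections import Counter

# Assumed senior-approval threshold and illustrative invoice stream.
APPROVAL_THRESHOLD = 5000.0

invoices = [
    ("Shop A", 4900.0), ("Shop A", 4850.0), ("Shop A", 4990.0),
    ("Shop B", 1200.0), ("Shop B", 7500.0),
]

def near_threshold_suppliers(invoices, threshold, margin=0.05, min_count=3):
    """Flag suppliers who repeatedly bill just under the approval threshold."""
    counts = Counter(
        supplier
        for supplier, amount in invoices
        if threshold * (1 - margin) <= amount < threshold
    )
    return [supplier for supplier, n in counts.items() if n >= min_count]

print(near_threshold_suppliers(invoices, APPROVAL_THRESHOLD))
# Shop A billed three times within 5% of the threshold -> worth a closer look
```

The `margin` and `min_count` parameters are tuning choices; in practice they would be calibrated against a carrier's own invoice history rather than fixed constants.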

AI creates new doors for fraud. Fraud risk doesn’t stem solely from how bad actors use AI; it can also come from the ways insurers deploy it. AI-powered tools such as chatbots may be tested to avoid offensive or inappropriate responses yet can be vulnerable to manipulation that exposes sensitive data or enables fraudulent transactions. Robust adversarial testing is required to ensure sophisticated attackers can’t exploit systems.

Insure new risks. Insurers should practice the same discipline they advise their clients to follow by ensuring their own cyber coverage reflects today’s threat environment. Cyber insurance can help mitigate losses from data breaches, system manipulation, regulatory scrutiny and AI-enabled fraud incidents. Equally important is understanding policy exclusions and limitations, as many cyber policies contain ambiguous or evolving language around AI-related risks.

Don’t overlook the human element. Despite technological advances, people remain the most exploited vulnerability. Social engineering attacks, phishing schemes and confidence-based manipulation continue to succeed because they target human psychology rather than technical flaws. Training employees to recognize suspicious behaviors, resist social engineering and challenge unusual requests is among the most effective defenses insurers can deploy.

In a world where machines can convincingly fake reality, the combination of high-quality data, intelligent automation, and well-trained people offers the strongest path forward. The cat-and-mouse game may never end, but insurers that invest in data, intelligence, and people will define the rules of engagement.