The rapid adoption of artificial intelligence by organizations across all industries is changing the commercial risk landscape; it’s time to consider making AI its own risk classification, according to a new report from Lockton Re, in collaboration with Lockton International and Armilla AI.

“There are no sectors of the economy that are insulated from the potential impact of AI. As an industry we need to prepare for how these rapidly evolving risks are underwritten across commercial insurance and what emerging claims patterns will look like,” said Oliver Brew, co-author of the report and head of Cyber Centre of Excellence, Lockton Re, in a statement.

“The underwriting of AI risk needs to consider the novel perils created,” added Baiju Devani, co-author of the study and CTO & Cofounder, Armilla AI. Devani noted “a growing gap between what insurers intend to cover and what they actually cover.”

The report maps AI-related exposures across key commercial classes, highlighting areas where coverage may be silent, fragmented or misaligned.

Cyber

AI is being leveraged to amplify and accelerate the impact of cyber attacks through high-fidelity phishing and deepfake technology.

Some cyber insurers are explicitly covering specific AI risks where the underlying trigger is a traditional cyber event, such as a data breach, security failure or ransomware attack that impacts AI infrastructure. The report said these AI‑focused endorsements indicate a shift toward limited named‑peril protection for potential cybersecurity harms arising from AI tools.

Another type of endorsement is emerging to address operational AI risks such as unauthorized access to LLM environments, including reimbursement for model redevelopment costs after an event.

Errors and Omissions (E&O)

AI models fundamentally shift the nature of technology professional liability risk, the report said, noting that traditional technology E&O policies were designed for deterministic software product and service failures such as bugs, outages and configuration mistakes; missed service-level agreements; and breaches of contractual obligations. The probabilistic nature of AI makes its outputs hard to predict and can create new claim scenarios that carriers need to address.

The report noted a trend toward selective, tightly defined endorsements, citing algorithmic decision errors, hallucinations, misguidance and data‑training issues as examples of named causes of loss. Clauses addressing the “AI services wrongful act” and “AI products wrongful act” have been added to some policies, explicitly extending insurance to products and services being developed using new AI technology.

The report warned that many of these new endorsements may be too narrow, leaving gaps when an incident falls outside a defined peril. This type of insurance also serves only a particular group: developers of AI solutions, rather than the companies that use AI models.

Casualty

Commercial general liability (CGL) insurers do not currently model, underwrite or price AI risks, so there is likely a growing gap between what insurers intend to cover and what they actually cover based on the policy language, the report said, stressing the need for specific exclusions similar to those developed for cyber perils.

One key factor in the interpretation of these clauses is how AI is defined, since advanced generative AI can produce its own synthetic content rather than only interpreting inputs.

Directors and Officers (D&O)

As organizations embed AI into their long-term strategies and operations, D&O exposure is rising in two areas: governance oversight and misrepresentation.

  • Governance issues arise out of allegations that boards have failed to identify, mitigate or disclose material AI risks, such as model bias, safety, reliability or vendor dependence.
  • Misrepresentation comes into play when organizations overstate the pace of AI development to encourage investment or elevate share price—sometimes known as “AI washing.”

D&O policies still hinge on traditional definitions of wrongful acts, so they do not guarantee coverage for AI‑specific failures. Standard conduct and intentional acts exclusions also apply (e.g., fraudulent statements about AI capabilities).

Employment Practices Liability (EPL)

The expanding use of AI in hiring amplifies the risk of bias and discrimination, since the AI models may have been trained on prejudicial data.

Many policies are silent on the use of AI models and also reference “insured persons” or “natural persons” as the policyholder, which may limit coverage for output generated by AI models.

Affirmative Coverage

A new category of insurance is emerging to address ambiguities and potential gaps in traditional commercial insurance cover, particularly where probabilistic model behavior is assessed through legacy negligence or “wrongful act” constructs.

Affirmative coverage is typically written on an “all-risk” basis, specifically designed to cover liability arising out of AI model error, including scenarios that may not involve a cyber event or malicious actor.

One approach is to assess the “target model metric” as part of the underwriting process. Each model is underwritten on its individual merits, based on factors such as the industry, context of the outputs, any underlying foundation model, the version deployed and the use case. This approach allows for bespoke pricing and enables clearer articulation of coverage intent.
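
For illustration only, since the report does not publish a rating formula, the Python sketch below shows one way the factors above might be captured as a structured underwriting record with a toy scoring heuristic. The class name, field names and weights are all assumptions made for this example, not the report's or any insurer's actual method.

    from dataclasses import dataclass

    # Hypothetical record of the per-model factors the report says
    # underwriters weigh; every field name and weight is illustrative.
    @dataclass
    class ModelUnderwritingProfile:
        industry: str           # e.g., "healthcare", "retail"
        output_context: str     # how outputs are used, e.g., "advisory", "autonomous"
        foundation_model: str   # underlying base model, if any
        model_version: str      # the specific version deployed
        use_case: str           # the business application

        # Toy heuristic (an assumption, not a published formula): a higher
        # score signals higher assumed exposure, feeding bespoke pricing.
        def risk_score(self) -> float:
            industry_weight = {"healthcare": 1.5, "finance": 1.4}.get(self.industry, 1.0)
            context_weight = {"autonomous": 2.0, "advisory": 1.2}.get(self.output_context, 1.0)
            return industry_weight * context_weight

    profile = ModelUnderwritingProfile(
        industry="healthcare",
        output_context="advisory",
        foundation_model="example-base-model",
        model_version="2.1",
        use_case="clinical triage assistant",
    )
    print(f"Illustrative risk score: {profile.risk_score():.2f}")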

Examples of this type of affirmative AI coverage focus on model-specific risk assessment and clearly defined triggers for AI failure events rather than extending traditional cyber or technology E&O wordings through endorsements, the report said.

Systemic Risk

The report also explored the possibility of systemic risk due to shared AI infrastructure and common foundation models.

“The challenge for the insurance industry is not whether AI will create systemic risk events, but when, and if underwriting practices can keep pace,” said Devani.

Traditional systemic controls, such as diversification across geography and industry, are less effective with AI. When a widely deployed model contains compromised training data or other unintended performance characteristics, failures can occur simultaneously across multiple organizations regardless of geography, industry or individual risk management practices.

Effective underwriting of AI risk requires a fundamentally different approach compared to traditional commercial lines. In addition to focusing on individual policyholder risk management practices, underwriters must evaluate portfolio-level exposure concentration through shared model dependencies, architectural vulnerability to coordinated attacks, and the capacity to detect failures before substantial liability accrues.
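
As a rough sketch of that portfolio-level view, the Python example below tallies how much of a hypothetical book of policies depends on each shared foundation model and computes a Herfindahl-Hirschman-style concentration index. The portfolio data, model names and choice of metric are assumptions for illustration; the report does not prescribe a specific measure.

    from collections import defaultdict

    # Hypothetical portfolio: (policyholder, foundation model it depends on,
    # insured limit in USD millions). All names and figures are invented.
    portfolio = [
        ("Acme Logistics", "base-model-a", 10.0),
        ("Beta Health", "base-model-a", 25.0),
        ("Gamma Retail", "base-model-b", 5.0),
        ("Delta Finance", "base-model-a", 15.0),
        ("Epsilon Media", "base-model-c", 8.0),
    ]

    # Aggregate insured limits by shared model dependency.
    exposure_by_model = defaultdict(float)
    for _, model, limit in portfolio:
        exposure_by_model[model] += limit

    total = sum(exposure_by_model.values())
    shares = {m: v / total for m, v in exposure_by_model.items()}

    # Herfindahl-Hirschman index: sum of squared shares. A value of 1.0
    # means all exposure rides on one model; 1/n means it is spread
    # evenly across n models.
    hhi = sum(s ** 2 for s in shares.values())

    for model, share in sorted(shares.items(), key=lambda x: -x[1]):
        print(f"{model}: {share:.1%} of insured limits")
    print(f"Concentration (HHI): {hhi:.2f}")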