Artificial intelligence and its uses are a hot topic these days. AI can deliver savings in time and resources, but what happens when an AI-related error or failure occurs?

According to the National Alliance for Insurance Education & Research, liability insurance may need to step in to address losses caused by an AI system or software.

The industry alliance poses an interesting question about a loss caused, in part, by AI: “Who is liable – the AI system’s creators, operators or users?”

Key considerations:

1) Identify the parties that are potentially liable.

Much like other losses involving multiple parties (think of a building loss with several contractors on site), an AI-related loss requires identifying every party that may be at fault for the damage, including the creator(s), designer(s), installers and maintenance providers.

As noted by the alliance, “liability may rest with the developer or manufacturer of the software, the business or individual who operates it, or the end-user who interacts with it.”

2) Evaluate how the potential damage occurred.

“AI systems can cause harm in various ways, such as property damage, personal injury, defamation and invasion of privacy. It is crucial to consider all possible scenarios when assessing the potential risk and determining coverage,” the alliance stated.

How the AI system interacts with other technologies and third-party vendors will further complicate liability assessment, the alliance added.

3) Stay up to date on regulations.

“For example, the European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA) both have provisions that require organizations to be transparent about how they collect and use consumer data – including data collected through AI,” the alliance noted.

4) Understand “black box” issues.

According to the National Alliance for Insurance Education & Research, the complex neural networks behind many AI systems can be difficult to interpret, which underscores the need for insurance risk professionals to work with AI developers to enhance transparency.

5) Recognize that traditional policies may not be enough.

A one-size-fits-all policy will not work for this rapidly evolving risk, the alliance added. Specialized policies, tailored to AI technology providers and users, are expected to emerge to address “issues such as cybersecurity breaches, intellectual property disputes and product liability claims related to AI system failures.”

According to the alliance, insurers including Chubb, AXA XL, Zurich and Allianz currently offer AI liability insurance policies.