How could generative artificial intelligence affect litigation and insurance coverage as it’s increasingly used in the workplace?

Experts at Carrier Management’s recent InsurTech Summit discussed the protection measures insurers and businesses alike will need to have in place as they seek to implement this technology.

AI for Employment

One big challenge of using AI is the discrimination risk it presents in the employment process, “especially in HR decision-making,” said Holly Goodman, board-certified labor and employment attorney and shareholder at law firm Gunster.

She noted that many businesses are moving toward using AI resume-screening software in their employment-related processes. In that situation, the software may not deliberately target protected classes of workers, but it could “inadvertently screen out certain individuals based on their background, or their history, or based on, for instance, models as to who has been successful at the organization.”

Companies are also tapping AI chatbot software for hiring and workflow systems, using the AI to schedule and screen job applicants based on their initial responses. Businesses are using generative AI to develop job descriptions and policies, too, she said.

Another concern is that the AI is going to base these job descriptions “off of predictions as to what those words should look like coming together,” Goodman said. “So, what you may get might look like a job description, it might have the pattern of a job description, and it might even use some really great buzzwords in your industry. But it might not be tailored enough for your business to really address the things that you need.”

Goodman is also seeing employers use data-analysis algorithms to identify “job fit or culture fit.” However, unintentional discrimination concerns could stem from this as well.

“Because now we’re looking at what our workforce currently looks like,” she explained. “Who is successful in our workforce? And the AI may not be looking for protected demographics of the workforce.”

If the most successful people in an organization are young men named Kevin, for example, “then the AI might think that’s what it takes to be a successful worker in your workforce and to be a good culture fit, or a good fit for the job,” Goodman said.
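To make that mechanism concrete, below is a deliberately naive sketch in Python. The data, names, and scoring function are all invented for illustration; this is not any vendor’s actual screening model. It shows how a facially neutral “resembles our past successes” score can end up rewarding proxies for age and sex without ever being told to.

```python
# Illustrative only: a naive "culture fit" scorer that mimics past
# hiring outcomes. All records here are synthetic.
from collections import Counter

# Historical "successful employee" records. If past success skews
# toward one demographic, a model fit to it inherits that skew.
past_successes = [
    {"name": "Kevin", "age_band": "25-34"},
    {"name": "Kevin", "age_band": "25-34"},
    {"name": "Kevin", "age_band": "35-44"},
    {"name": "Maria", "age_band": "45-54"},
]

# "Training": count how often each feature value appears among successes.
name_freq = Counter(r["name"] for r in past_successes)
age_freq = Counter(r["age_band"] for r in past_successes)
total = len(past_successes)

def fit_score(candidate):
    """Score a candidate by resemblance to past successes.

    Neutral on its face (no protected attribute is an input), but first
    name and age band act as proxies for sex and age.
    """
    return (name_freq[candidate["name"]] / total
            + age_freq[candidate["age_band"]] / total)

applicants = [
    {"name": "Kevin", "age_band": "25-34"},   # resembles past hires
    {"name": "Gloria", "age_band": "55-64"},  # equally qualified on paper
]
for a in applicants:
    print(a["name"], round(fit_score(a), 2))
# Kevin scores far higher purely because past successes looked like him.
```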

AI Lawsuits Ahead

As AI is implemented more frequently in employment processes, Goodman foresees “significantly more” AI litigation on the horizon. Data privacy risks arise from using data that contains personal or identifiable information, she said, and further risks arise in the employment sphere.

“In the employment realm, I think we are expecting to see more cases where AI had a part in an employment decision that is being challenged as being discriminatory,” Goodman shared. “Either based on an intentional discrimination theory … or, more likely, based on an adverse impact.”

Goodman defined “adverse impact” as unintentional discrimination, “where you can see based on a pattern that some neutral policy that was never intended to be discriminatory has an impact on one or more protected classes.”
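For illustration, adverse impact is commonly screened for with the EEOC’s “four-fifths rule”: a selection rate for any group that is less than 80 percent of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The sketch below applies that rule to invented applicant counts.

```python
# A minimal adverse-impact check using the "four-fifths rule".
# The counts below are invented for illustration.
selected = {"group_a": 48, "group_b": 12}   # applicants advanced by the screen
applied  = {"group_a": 100, "group_b": 50}  # applicants per group

rates = {g: selected[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")

# group_b's rate (24%) is half of group_a's (48%), well below the
# four-fifths threshold, even though the policy itself was neutral.
```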

Businesses should understand that they will be responsible for the data they use and how they use it, she said.

AI Model Error Risk

Michael Berger, head of Munich Re’s Insure AI team, said his team emphasizes the use of data in risk assessment to underwrite artificial intelligence model error risk, or “the risk that the output of a model is wrong in some sense.”

It’s an important risk, he said, because decisions based on faulty AI modeling can lead to financial losses and create liabilities.

“In order to underwrite this risk, we need to quantify the risk,” Berger said, which means assigning a value to the severity and frequency of the losses. “One of the big lessons that we’ve learned … is that it is possible to quantify the model error risk of even a black box model. So, I think that’s quite good news, because this also means, then, [that] risks from model errors are essentially insurable, even from black box models.”
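To sketch what quantifying frequency and severity might look like in code, the example below estimates an error rate from a labeled, held-out evaluation set (which needs only the black-box model’s inputs and outputs, not its internals) and combines it with an assumed per-error loss distribution in a Monte Carlo simulation. All numbers are invented, and this illustrates the general frequency-severity approach, not Munich Re’s actual methodology.

```python
# A sketch of quantifying model error risk as frequency x severity,
# treating the model as a black box. All parameters are assumptions.
import random

random.seed(0)

# Frequency: estimate the error rate from a held-out evaluation set.
# Here we fake the outcome of each case; in practice you would run the
# model on each labeled case and record whether its output was wrong.
eval_outcomes = [random.random() < 0.03 for _ in range(10_000)]  # True = error
error_rate = sum(eval_outcomes) / len(eval_outcomes)

# Severity: assumed loss per erroneous decision. A lognormal is a common
# heavy-tailed choice for loss severities; these parameters are invented.
def draw_loss():
    return random.lognormvariate(mu=8.0, sigma=1.5)  # dollars per error

# Monte Carlo: simulate many "years" of decisions to build a loss
# distribution that supports pricing and capital decisions.
decisions_per_year = 10_000
n_sims = 200
annual_losses = []
for _ in range(n_sims):
    n_errors = sum(random.random() < error_rate for _ in range(decisions_per_year))
    annual_losses.append(sum(draw_loss() for _ in range(n_errors)))

annual_losses.sort()
print(f"estimated error rate: {error_rate:.3%}")
print(f"expected annual loss: ${sum(annual_losses) / n_sims:,.0f}")
print(f"99th percentile loss: ${annual_losses[int(0.99 * n_sims)]:,.0f}")
```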

Silent AI

These model errors or hallucination-related risks, in which the AI produces false or misleading information, could be covered, in part, by professional indemnity insurance, depending on the policy language, Berger said.

This is because of “silent AI,” a term analogous to “silent cyber,” in which an insurer unknowingly has exposure to a risk because it is unintentionally wrapped into a policy.

“Silent AI is similar to the term silent cyber,” Berger said. “We need to note that insurers might already be exposed [to] and cover certain AI risks in traditional policies without even the traditional policy stating that those AI risk scenarios are explicitly covered.”

Another example is AI discrimination risk, which could be covered by employment practices liability insurance, he said. The risk of copyright infringement via generative AI model output could be covered by media liability insurance as well.

“I think the first thing to do is that both insurers and insureds really need to become aware of the existing AI exposure, which might or which might not be covered in the traditional policies,” Berger said. “From an insurer’s perspective, they need then to ask themselves whether this exposure is correctly underwritten and priced” and then determine if they want to make the coverage explicit and affirmative in traditional policies or move forward with exclusions and endorsements, he said.

Go Deeper

Watch a replay of this session and all of the panels hosted during Carrier Management’s 2024 InsurTech Summit: AI for Everything.