Artificial intelligence is evolving rapidly, and insurers are finding themselves in the driver’s seat of this transformation, according to experts who discussed navigating compliance issues at Carrier Management’s 2025 InsurTech Summit.

“The main thing is insurers have to emerge as the leaders, the masters of AI, because that’s going to be a major determinant of their success in the future,” said Scott Seaman, partner at Hinshaw. “I know it’s a little bit strange for an insurance lawyer to say this, but you really have to lead with the AI.”

That may seem counterintuitive in a compliance discussion, where the advice often leans toward exercising caution or scaling back the use of technology to avoid legal or regulatory challenges. However, Seaman said reliance on AI isn’t a problem if the right people are leading the charge from the beginning.

“If the AI is good, it’s almost always going to be possible to make it compliant,” he said. “If you can’t achieve compliance, it usually means that the AI is flawed in some respects.”

A People-First Approach

This starts with the people who are developing, training and working with the AI tools.

“Having the right people in place is almost 90 percent of eliminating problems and ensuring compliance, and then robust governance and risk management controls and monitoring and audits all come into play,” he said. “But the key is for companies to have control over the AI and not be controlled by it.”

Marcus Daley, technical co-founder at NeuralMetrics, agreed that AI compliance doesn’t necessarily start with the technology itself, but with the people behind it.

“The people who create it and then manage it are really the ones that are steering it, and that’s what is going to define whether or not it’s going to meet those objectives or not,” he said.

Ryan Griffin, partner, financial lines–cyber, at McGill and Partners, said increased use of AI will inevitably lead to fear of job loss in insurance, but he doesn’t think that will be the case.

“I think what we’re going to find is that [AI] is going to actually augment the people who are doing these difficult jobs,” he said. That augmentation matters because insurers are not only providing protection for clients that use AI; they are adopting the technology themselves.

“It’s kind of funny. In the world that I live in where I work with a lot of insurers who try to underwrite and offer cyber insurance, oftentimes, they’re the victim of cyber attacks or some of their own data privacy policies have been challenged or they’ve been scrutinized by regulators.”

“So it is a tough thing,” he said. “It’s somewhat the pot calling the kettle black. You’re offering this insurance to third parties, and then you yourself have your own issues in that space. So I think [insurers are] just going to have to continually evolve to keep up with new technology.”

This means understanding data mapping tools and third-party vendor exposure in order to manage the risks that come with embracing new technology.

“Underwriting is becoming far more reliant on external data feeds and tools, and because of the things that our insurance companies are able to ingest from the outside, you have to be very mindful of where that data comes from,” Griffin said.

‘Practice What You Preach’

Griffin added that most importantly, insurers need to practice what they preach in terms of cybersecurity and data privacy.

“The insurance community has done a great job in managing privacy and cybersecurity risk for the general business community via insurance solutions,” he said. “Go to those resources and then apply those inward and underwrite yourself. I think that’s really important. There are great resources within the legal community, within the technology community, that you can leverage to make sure that you are as mature as you can be while you try to be compliant. You need to be beyond compliant when we’re talking about advanced technology, and there are great resources sitting within our own insurance community that you can leverage.”

When building a compliance framework around AI, insurers should make sure that the technology’s decision-making process is fair and that accountability is in place to trace how AI systems arrive at the decisions they’re making, Seaman said.

“And you want to make sure the damn thing works, so you have to test it, and make sure it’s compliant, it’s accurate, it’s nondiscriminatory,” he said. “You have to repeat the testing to make sure it’s consistent, make sure that it’s not hallucinating or making things up. Monitor compliance and document things. If you do all that, I’d say you have a pretty good framework.”
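The repeat-testing step Seaman describes can be pictured in code. The sketch below is purely illustrative: `score_applicant` is a hypothetical stand-in for an insurer’s AI decision tool, and the two checks (same answer on repeated runs, no large approval-rate gap between cohorts) are minimal examples of the consistency and nondiscrimination tests a real framework would formalize, not any panelist’s actual tooling.

```python
def score_applicant(profile: dict) -> str:
    # Hypothetical placeholder for an AI decision tool; a real model
    # would be called here instead of this deterministic rule.
    return "approve" if profile.get("risk_score", 100) < 50 else "refer"

def consistency_check(model, profile: dict, runs: int = 5) -> bool:
    """Repeat the same input several times; flag the model if answers vary."""
    results = {model(profile) for _ in range(runs)}
    return len(results) == 1

def disparity_check(model, group_a: list, group_b: list,
                    tolerance: float = 0.1) -> bool:
    """Compare approval rates across two cohorts; flag gaps beyond tolerance."""
    def rate(group):
        return sum(model(p) == "approve" for p in group) / len(group)
    return abs(rate(group_a) - rate(group_b)) <= tolerance
```

In practice, each run of these checks would also be logged and retained, which covers the monitoring and documentation steps Seaman lists.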

Griffin emphasized the importance of building a solid compliance framework because regulatory oversight in the AI space is so new and rapidly evolving.

“You really don’t see the regulatory enforcement until something very egregious happens,” he said. “You don’t want to be the first one to get in trouble.”

Letting Go of Fear and Getting Started

However, Daley added that this shouldn’t scare insurers away from embracing these technologies in the first place.

“Just get started,” he said. “Getting started does not mean you need to deploy it into production. It just means that you need to start working with it. But I think until you actually get started with it and work with it, there are always going to be reasons not to.”

The danger of giving in to this fear, he said, is that AI tools are advancing so quickly that those who jump in now will capture a large share of the market.

“As they move forward and then are deploying it, you’re going to find that there’s perhaps less of a need for so many providers. Because if they get particularly good at it and the AI is helping them with the decision making, then their decision making is going to be so efficient that it’ll box out a lot of other players,” he said. “And I do think that is a significant risk. And I don’t think that’s necessarily a bad thing. I don’t think you should prevent that. But it does, to me, mean that for everybody, small or large, just get involved.”

If insurers are able to embrace this technology and navigate compliance issues, they can bring a lot of value, Seaman said.

“Insurers are able to use AI and InsurTech to make the lives of the people and the companies they insure more simple, more predictable and less risky. They’re providing important value, and they’re going to be successful,” he said.

“One thing’s for sure, AI will not properly regulate itself. So insurers need to have human beings involved at all appropriate decision points.”