Cyber criminals are increasingly using artificial intelligence tools to carry out attacks, and it’s becoming harder for insurers to keep up, said Vishaal ‘V8’ Hariprasad, CEO and co-founder of Resilience, at Carrier Management’s 2024 InsurTech Summit.

“I think it’s just going to be a lot of speed and scale with very little room for error on the defensive side,” he said. “The attackers are already successful, and this only allows them to scale and increase their effectiveness and their efficiency.”

This is in part because the use of AI tools is making it easier to be a cyber attacker, leaving insurers and their clients with little margin for error when detecting or responding to threats, he said.

“We run hacker salons for our clients and broker partners at Resilience, and we just want to educate folks on just how easy it is to be an offender or an attacker,” he said. “In these salons, we show how easy it is to use ChatGPT — free tools — to generate well-crafted spear phishing emails that will pull in specific pieces of personal information about a target individual that’s just available on their LinkedIn or publicly available data on their website.”

Not only are cyber attackers using free generative AI tools that already exist, he added, but in some cases, they’re creating their own GPT models that lack the security guardrails of commercial GPT platforms.

“They can use them for the creation of misinformation, disinformation, very well-crafted, targeted spear phishing emails, and targeted malware at a rapid pace,” he said. “You know, the things that we used to see — the easy-to-spot spear phishing emails with common spelling errors or grammar errors — it’s going to be a lot tougher to be able to see that with the human eye.”

The silver lining is that AI isn’t only assisting cyber criminals; it can assist in defense efforts as well. But for many businesses and their insurers, figuring out where to start can be daunting, Hariprasad said.

“Probably the most important thing is don’t do it all yourself. It’s impossible to invest in every single device that’s out there and configure it and keep it up and running. Not worth the time,” he said. “I think most important is articulating and understanding what are the key items for your business, whether that’s the crown jewels of data or dependencies on business operations.”

Once companies define these key areas of concern, they can identify the right partners, whether managed security service providers, managed IT providers or cloud providers, to take on the risk.

“I would outsource security to those vendors,” Hariprasad said. “Make sure you understand where the data is, where the operations and processing are, and then seek the best opportunity to partner with outside providers to maintain that security for you to fight, follow or get that extra set of eyes as you mature your security program.”

He added the caveat that third-party vendors can themselves be attack vectors.

“Before you enter into a supply agreement or any type of vendor management, cybersecurity should absolutely be one of the minimum questions or requirements for that relationship that companies should define, or at least ensure that their vendors maintain the same level of cyber hygiene that the parent company does,” he said. “I would only partner with vendors that maintain the right level of compliance.”

It’s important to determine whether third-party vendors adhere to cybersecurity certification standards and carry cyber insurance for third-party liability, he said. And while AI tools can assist in the cyber defense process, Hariprasad cautioned that it’s important to keep a human in the loop.

“I think we will see moving forward that AI and these large language models will refine and be trained to better contextualize and speed up the identification of priority alerts for analysts so they can respond to the most important alerts faster,” he said. “The key point I’ll have everybody know, though, is that these tools will never fully automate a skilled analyst. We will always need a tailored, trained security analyst that can interpret these signals.”

This means human employees will need to be trained to handle threats such as phishing attacks, fraud and other social engineering.

“The human brain needs to be the last line of defense here,” he said. “So, I think companies need to really invest in not just the technology to defend against improved phishing techniques and disinformation but make sure that their employees — and especially those with the keys to the kingdom — are trained in spotting these phishing emails. We cannot rely on just technology to get around this.”

The full conversation is available on demand, free for a limited time.