Executive Summary

Risk mitigations enable insurance, and insurance enables the acceleration of innovation, writes Monitaur CEO Anthony Habayeb, offering his perspective on how insurers can be heroes in the quest to understand and tame AI risks.
“Insurance should be the industry that leads all industries in demonstrating responsible and ethical AI governance,” he writes, noting that the measures insurers take internally aren’t just good business practices. They also position carriers to better evaluate—and backstop—the use of AI by other companies.

I founded Monitaur in 2019 to enable confidence and trust in AI through governance and assurance. With my partners, the company was founded with the belief that by enabling governance, we could accelerate the positive potential of AI to make our lives better. Personally, I was inspired by the idea that AI would fundamentally change the fight against cancer, and I wanted to be a part of that. I still believe AI will improve our research, our drugs, our patient care and our treatment of cancer. But during our company’s journey, we found the insurance industry. And the more time we spend here, the more I believe AI’s future success flows through insurance.

The average person doesn’t realize how consequential insurance is to our everyday lives. How did that highway get built, that office building get erected, that new medical device get launched, that plane take flight? None of them happens without insurance. What started as ship merchants pooling dollars to protect one another’s fleets and cargo has become the safety net and enabler of every major project, innovation and industrial revolution. Insurance is what catches us and props us up as individuals during some of our lowest moments—when a family member passes, when a car accident happens, when our house is broken into, when hurricanes devastate communities.

“Because he’s the hero Gotham deserves, but not the one it needs right now. So, we’ll hunt him. Because he can take it. Because he’s not our hero. He’s a silent guardian, a watchful protector. A dark knight.”

(From the 2008 movie, “The Dark Knight”)

Insurance might be the single most important catalyst of our emerging AI economy, and we should embrace this responsibility.

There’s a lot of talk about the potential for AI to transform underwriting and claims, to bring whole new product types to the insurance industry, to transform the relationship between insurers and the insured. But what if the conversation we should be having is how insurance can become the economic catalyst for safe, transparent and accountable AI?

Several months ago, I was interviewed for an article about the potential market for AI insurance. I loved the conversation and the thought exercise, but there is so much more to the story, and to the opportunity, behind the insurance industry’s interest in AI insurance. The concept of insuring AI might be the single most influential catalyst for accelerating AI and, more importantly, for responsible, ethical and governed AI.

Expecting the Unexpected

At some point in the not-so-distant future, every single company will have some AI in its business (our new economy), and it will need protection from the eventuality that something unexpected happens with that AI. AI is fickle; it is built with varying degrees of human competency, and it will be introduced to environments unlike its training environment. We all need to expect the unexpected.

“Our AI insurance underwriting math would include assessments of scale, usage, impact, data quality, development quality, organizational quality, and ongoing proof of actual performance and impact.”

But moving forward in new economies with known risk is a solved problem, thanks to insurance.

Cars and planes crash. We still drive and fly.

Medical devices fail. We still use them.

People have negative reactions to medication. We still take them.

More people joined the early merchant fleets only because insurance was available. Insurance enters markets to support and fuel innovation when 1) there is societal excitement about the benefits; 2) there are known but accepted risks; and 3) there are viable ways to evaluate and reasonably measure or reduce the risk.

Through that lens, ChatGPT has pushed societal excitement about AI across the chasm, in parallel with our increasing awareness and acknowledgement of the risks; however, we are lagging in our ability—and unfortunately, at times, our motivation—to reasonably reduce the risks and enable their evaluation.

Enter Batman—a.k.a. insurance. Not regulations, not standards, not good intentions…good old-fashioned insurance. The industry whose fundamental societal responsibility is to see risk, evaluate it, distribute it and keep it from limiting forward momentum. Don’t get me wrong: regulations and standards are hugely important and accelerating almost daily, but they are “sticks,” not “carrots.”

The strategic and financial value of insurance to companies is huge; however, AI insurance can’t really happen without Point No. 3 above. We are increasingly aware of the risks, but AI insurance requires the ability to examine what steps a company has taken to reduce the likelihood or scope of those risks if or when they happen. Our AI insurance underwriting math would include assessments of scale, usage, impact, data quality, development quality, organizational quality, and ongoing proof of actual performance and impact.
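
To make that idea concrete, here is a purely hypothetical sketch of how those assessment areas could roll up into a single underwriting score and an indicative premium. Every factor name, weight and pricing rule below is invented for illustration; it is not an actual underwriting model, actuarial method or Monitaur product.

```python
# Hypothetical illustration only: a toy rollup of the assessment areas named above
# into a single risk score. Factor names, weights and the premium rule are invented.
from dataclasses import dataclass


@dataclass
class AIRiskAssessment:
    # Each field is scored 0.0 (poor / high risk) to 1.0 (strong / low risk).
    scale_and_usage: float         # how widely the model is deployed and relied upon
    impact: float                  # severity of harm if the model misbehaves
    data_quality: float            # lineage, representativeness, validation of data
    development_quality: float     # testing, documentation, change control
    organizational_quality: float  # governance roles, reviews, accountability
    monitored_performance: float   # ongoing proof of actual performance and impact


# Invented weights for the sketch; they sum to 1.0.
WEIGHTS = {
    "scale_and_usage": 0.15,
    "impact": 0.20,
    "data_quality": 0.15,
    "development_quality": 0.15,
    "organizational_quality": 0.15,
    "monitored_performance": 0.20,
}


def risk_score(a: AIRiskAssessment) -> float:
    """Weighted average of the factor scores; higher means better governed."""
    return sum(getattr(a, name) * w for name, w in WEIGHTS.items())


def indicative_premium(base_premium: float, a: AIRiskAssessment) -> float:
    """Toy pricing rule: a well-governed insured pays close to the base premium;
    a poorly governed one pays up to double."""
    return base_premium * (2.0 - risk_score(a))


if __name__ == "__main__":
    insured = AIRiskAssessment(
        scale_and_usage=0.6, impact=0.5, data_quality=0.8,
        development_quality=0.7, organizational_quality=0.9,
        monitored_performance=0.8,
    )
    print(f"risk score: {risk_score(insured):.2f}")
    print(f"indicative premium: {indicative_premium(10_000, insured):,.0f}")
```

The point of the sketch is the structure, not the numbers: each governance dimension becomes a measurable input, and better evidence of governance translates directly into better pricing.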

Here is the amazing, awesome, fantastic alignment between what our future AI economy needs from insurance and the job every insurance company should be doing today: Everything you would want to know to offer AI insurance in the future is observable and measurable inside your business right now!

Internal investments in AI governance are not just compliance checks or operational improvements to project effectiveness. They are the training and actuarial data that every carrier or reinsurer interested in offering AI insurance in the future needs in order to learn about AI risks and risk mitigation.

I could get on a soapbox about the business value of investing in AI governance. (Feel free to follow me on LinkedIn or search for some of my other media contributions.) The more I think about this benefit, the clearer it becomes: For our biggest and most impactful property, casualty, specialty and reinsurance companies, this is huge.

You are a carrier using or planning to use AI to automate claims, underwriting or pricing. You want your company protected from claims and liabilities, right? You want to be a good corporate citizen and protect your consumers, right? You want to stay ahead of the competition with AI, right? You would love to have insurance protecting you from an eventual AI-related loss or impact, right?

All of this, along with your readiness to participate in the future AI economy, is enabled by better model governance:

  • Comprehensive risk management program
  • Objective reviews and distribution of responsibilities
  • Strong data quality management and validations
  • Proof of data privacy and permission management
  • Ongoing monitoring and validations of performance and impact
  • Readiness for examinations or audits

Risk mitigations enable insurance, and insurance enables the acceleration of innovation. Insurance is defined as a thing providing protection against a possible eventuality, and the industry exists to guarantee payment against that loss (the eventuality) in return for the payment of premiums.

Seatbelts in cars, guardrails on highways, drug trials before distribution—they are the foundation of insurance. Model governance is the same for AI.

Alright, so let’s say you buy into this vision and belief. What should insurance carriers do next to be the partner and protection our future AI economy needs?

The answer is actually pretty simple. Act first. Insurance should be the industry that leads all industries in demonstrating responsible and ethical AI governance. Every insurance company should invest in building better-governed systems—not just because it is good business but because doing so positions it to better evaluate and backstop the use of AI by other companies.

Insurance is the hero AI needs right now.