There are some algorithms in life that, even with the very best marketing, don’t really matter too much.

For example, if Netflix’s recommendation algorithm points me toward the first episode of a terrible new show when all I needed was a “Frasier” re-run, I’ll be fine. I might have wasted 20 minutes trying out the new show, but I can be back in Seattle without leaving my couch. Annoyed and inconvenienced, sure. But life goes on.

Executive Summary

Matthew Jones, a principal of fintech venture firm Anthemis, proposes the development of a new algorithmic insurance line of business—a modern-day hybrid of cyber, product liability, and errors and omissions insurance. After outlining what it would cover, he proposes a framework for assessing AI risk to bring the concept to life and suggests the possibility of a rating agency that also might supply algorithm ratings as an enabler for the insurance industry.

In his view, a good starting point could be an underwriting model that considers People, Process, Technology and Data. For each component, he offers relevant questions, such as “Is a Ph.D. better than a Master of Science degree?” when evaluating the quality of the people building AI models.

It’s time for insurers to figure out if they want to cover these risks before they find out that they were covering them all along, he concludes.

Another inhabitant of Seattle, e-commerce giant Amazon, launched its Amazon Web Services (AWS) platform back in 2006. Amazon certainly didn’t invent cloud computing, but without question the company played a key role in propelling it forward and making cloud-based IT infrastructure far more accessible from both ease-of-access and cost perspectives. Because companies no longer need to invest thousands of dollars in servers before launch, cloud computing has led to an explosion of startups able to serve customers from anywhere. More importantly, however, these companies, many of which have grown well beyond the startup stage, are now able to deploy sophisticated applications that use artificial intelligence (AI)—classifying, deciding, suggesting—just like Netflix’s recommendation algorithm.

Even during a period of not being able to leave the house, life isn’t just about TV and movie recommendations. As cloud computing has grown in popularity and as technologists have become even better at using the technology presented to them, AI is almost everywhere. At home and at work, from robot vacuum cleaners to underwriting decisions, our life is increasingly shaped—and often made much better—by AI. AI helps to speed up very complex decisions and in many cases can make better decisions than people. But just like manual vacuum cleaners and human underwriters, things can catch fire or someone can make the wrong decision. There can be consequences—economic or otherwise. That’s where the traditional insurance policy steps in. So far, so good.

In recent years, the insurance industry has facilitated the emergence of a new line of business, cyber insurance, separating it from existing property and casualty lines. It has become one of the fastest-growing and most exciting lines of business, attracting talent from outside the insurance industry. Cyber insurance—typically covering losses resulting from a cyber attack on a company—is closely related to AI insurance. But it is different.

So, amid the rapid deployment of powerful technology that enables hardware to operate independently and software to make life-changing decisions, the question I’m asking is: Who is insuring the algorithms? When things go wrong and at a time when we might need it most, where is AI insurance? What would it cover, and what does it mean for the industry?

In short, I think we are looking at the development of a new line of business, best thought of as a modern-day hybrid of cyber, product liability, and errors and omissions insurance.

What Is AI Insurance, and What Will It Cover?

In some lines of business, the peril being insured is wonderfully obvious: If the ground shakes or the wind blows, insurers pay and the customer can rebuild. AI’s machine learning models essentially do one thing: They make decisions. As noted by John Villasenor, a Fellow at Brookings, “given the volume of products and services that will incorporate AI, the laws of statistics ensure that—even if AI does the right thing nearly all the time—there will be instances where it fails.” But what is failure in the context of AI insurance?

In their early 2020 HBR article, Ram Shankar Siva Kumar and Frank Nagle define two kinds of failure:

  • Intentional failure. This can be thought of as “active adversary attempts to subvert” the AI system; this might be covered by cyber insurance, but cyber insurance is often concerned with data breaches and may not always cover third-party damage caused by your software.
  • Unintentional failure. This is where AI systems “fail by their own accord” and might produce a “formally correct—but practically unsafe—outcome.” If humans made such mistakes, the consequences would probably be covered by existing liability policies.

AI failures generally occur at one (or both) of two points—during the learning phase or the performance phase, according to AI researcher Roman Yampolskiy. (Source: “Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures,” paper by Roman V. Yampolskiy, University of Louisville and M. S. Spellchecker, Microsoft Corp.)

Wherever in the process the mistake occurs, and whether the decision comes in the form of a classification or a recommendation, at the core of AI insurance will be protection in the event that a machine makes the wrong decision.

AI has proliferated across many different industries, and poor decision-making by AI can lead to a range of consequences, from economic loss to property damage and injury.

It may be argued that many of these risks—property damage or injury in particular—are well known by many insurers and happily covered (selectively or at the right price) by other policies. However, I believe that these risks are known in an aging context that is becoming less relevant as AI becomes more pervasive in our society. This new, distinct customer group—high-tech companies—will not tolerate a patchwork quilt of coverage from five different policies (and maybe five different insurers). The answer is to carve these risks out of existing lines to create a single, unified AI insurance policy. Insuring them will require a new framework for understanding these new and rapidly changing drivers of risk.

A Framework for Assessing AI Risk

For someone looking to understand the risks associated with AI, a good starting point could be the People, Process and Technology model, plus Data. I believe that the total risk being covered will be a function of the quality of the people building the model, the processes embedded within their workflows, the technology that they use and the quality of the input data.
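
To make that idea concrete, here is a purely illustrative sketch—the component names, scales and weights are my assumptions, not an established underwriting methodology—of how those four components might be blended into a single score:

```python
from dataclasses import dataclass


@dataclass
class AIRiskScorecard:
    """Illustrative only: each component of the People, Process,
    Technology and Data framework is scored 0-1 (1 = lowest risk)
    and blended with assumed weights into an overall score."""
    people: float      # e.g., team experience and track record
    process: float     # e.g., code review and deployment controls
    technology: float  # e.g., maturity of the stack, third-party modules
    data: float        # e.g., provenance, bias checks, security

    # Hypothetical weights; a real underwriting model would calibrate
    # these against claims experience.
    WEIGHTS = {"people": 0.25, "process": 0.25, "technology": 0.20, "data": 0.30}

    def overall(self) -> float:
        return sum(self.WEIGHTS[name] * getattr(self, name) for name in self.WEIGHTS)


# Example: a strong team with weaker data governance.
print(AIRiskScorecard(people=0.9, process=0.7, technology=0.8, data=0.5).overall())
```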

People

Algorithms don’t just appear. They start as the product of work by talented data scientists with PhDs in complex topics that, even with lots of studying, I’ll never even half understand. There are many different factors that could go into assessing the team and its different parts.

Is a PhD really better than an MSc? What about the quality of the institutions that awarded those qualifications? Perhaps even more interesting is the question of experience over certificates: the data could show, over time, that a team made up of engineers who previously worked at Microsoft develops AI that makes fewer mistakes than a team composed of engineers from another technology company. (Recall that most of this information is available online, through LinkedIn.) If culture eats strategy for breakfast, maybe it can also define the quality of AI developed by a team of software engineers.

Process

Product teams work within environments that have existing workflows, checks and sign-off processes. Data scientists develop algorithms; data engineers help wrangle large or varied datasets; software engineers build the algorithm into working software. The handovers, and the gap between how a data scientist builds something and how it is built in production, are yet another source of risk. If a company spends money on staff and embeds layers of additional checks on code before it is pushed to production (essentially a modern-day risk engineering function), is it a lower risk than an identical company that doesn’t? Consider the company that commits to shipping over-the-air updates every month: Is it a higher or lower risk than otherwise identical companies if the code base is changing so often? How does overall risk change if that company’s end users are largely in locations with poor Internet connectivity and can’t receive the updates?

Technology

Software developers can rattle through hundreds of versions in only a few months. How does risk change across those? Is it better to be using the very latest version of a given technology, or should you always be a couple of versions behind? How does the overall risk change once a company integrates certain third-party software modules, developed by specialists beyond the sphere of control of the original AI developer?

Specialists might do the job better than an insured’s engineers could. For example, shouldn’t security software be developed by real cyber experts? How does the overall risk change when working with one provider over another, and will the consequent impact on insurance pricing lead to favoring specific providers above all others?

Data

Machine learning models are only as good as the training data that goes into them, meaning that the source of the data cannot be ignored. Does the algorithm independently select the type of information to collect, or are humans involved in that process? Do the people involved have strict guidelines for managing that process? (The first question is discussed in the paper “Am I An Algorithm Or A Product? When Products Liability Should Apply To Algorithmic Decision-Makers,” by Karni A. Chagal-Feferkorn, Fellow, Haifa Center for Law & Technology, University of Haifa Faculty of Law.)

Does the dataset being used for training reinforce existing biases, and could this later lead to morally questionable (or even illegal) decisions being made? Is data better if it comes from multiple sources and is input into the model at random?
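
As one toy illustration of the kind of evidence an underwriter might ask for on the bias question—the field names and data below are invented, and real audits would use far richer metrics and domain review—a simple check could compare positive-outcome rates across groups in the training data:

```python
from collections import Counter


def approval_rate_gap(records: list[dict]) -> float:
    """Toy fairness check: compare positive-outcome rates across groups
    in the training data. A large gap suggests the dataset may encode,
    and therefore reinforce, an existing bias."""
    totals, positives = Counter(), Counter()
    for record in records:
        totals[record["group"]] += 1
        positives[record["group"]] += record["approved"]
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values())


sample = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]
print(approval_rate_gap(sample))  # 0.5 -> a gap worth investigating
```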

If the insured can demonstrate that its data is secure, or if the possibility of it being tampered with by criminally motivated third parties is low, isn’t that a more interesting risk to take than another company that can’t make similar assurances?

How to Bring This to Life?

While the AI risk “platform” isn’t burning, it is warming up. Consider AI failures such as the adult content filtering software that failed to remove inappropriate content, the image tagging software that classified Black people as gorillas, or the factory robot, designed to grab auto parts, that grabbed and killed a worker. All of those cases happened in a single year.

Needless to say, insurers can’t do this all by themselves. It takes an ecosystem to solve a problem like this, and the pieces of this puzzle will need to come together over a period of time. The main trigger for a global conversation on the topic of AI insurance will be an AI-triggered event that causes widespread losses, just as has happened with pandemic insurance. For some insurers, it may already be a little too late. As we have seen in the COVID-19 pandemic, business interruption wordings have, in some cases, been too vague. Even though capital was not held against the risk, legal systems may be enforcing (or perhaps strongly encouraging) cover. A similar test of wordings in cyber, product liability, or errors and omissions insurance following an AI-driven disaster may reveal that some insurers have assumed risks that they really did not expect.

Reinsurer Swiss Re recently proposed the idea of requiring companies to release cybersecurity resiliency reports, according to a June 28 article in the Financial Times. This hints at another missing piece that we could soon see emerge: credit-like cyber ratings and a rating agency for AI quality and risk, with “issuer” ratings for the tech company and algorithm ratings for specific pieces of technology. An overall rating (e.g., “BB”) would break down into several different components and would be useful for many different stakeholders: shareholders and prospective investors, suppliers and partners, and of course, end users.
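
As a purely hypothetical sketch of how such a rating might be structured—the company, model name and letter grades below are invented for illustration—an algorithm rating could roll component grades up into an overall grade:

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class AlgorithmRating:
    """Hypothetical structure for an AI rating agency's output: an
    'issuer' (the tech company), a specific algorithm, and component
    grades that roll up into an overall letter grade."""
    issuer: str
    algorithm: str
    components: Dict[str, str] = field(default_factory=dict)

    GRADES = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]  # best to worst

    def overall(self) -> str:
        # Naive roll-up: the overall grade is capped at the weakest
        # component, a common "weakest link" convention in risk scoring.
        if not self.components:
            return "NR"  # not rated
        return max(self.components.values(), key=self.GRADES.index)


rating = AlgorithmRating(
    issuer="ExampleTech",          # invented company
    algorithm="claims-triage-v3",  # invented model name
    components={"people": "A", "process": "BBB", "technology": "A", "data": "BB"},
)
print(rating.overall())  # -> "BB"
```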

If such an institution were to emerge, it would be an enabler for the insurance industry. These risk ratings would be only one factor for the insurer, perhaps combined with cyber risk ratings, a company’s claims history and a retention factor. To really bring this to life, however, first on an insurer’s list must be talent. Insuring the risk posed by AI requires a blend of experienced insurance experts and technologists who deeply understand where tech can go wrong. Neither group can facilitate the development of this line of business on its own.

What role will governments play in controlling the development of AI? Will they become more interventionist or leave it to those companies that understand the tech best? Mandatory insurance seems excessive at this stage, and not every piece of AI software needs to be insured. For example, if the output is a recommendation that is reviewed by a human before being acted upon, that’s a very different risk from an algorithm permitted to act and make decisions alone in real time. However, the European Union has obligatory insurance on its radar. Its 2019 report “Liability for AI” states that “the more frequent or severe potential harm resulting from emerging digital technology, and the less likely the operator is able to indemnify victims individually, the more suitable mandatory liability insurance for such risks may be.”

If insurers were able to develop some confidence around accurately pricing the risk posed by AI, a mix of interesting questions emerges. If the risk associated with AI is insured, would it encourage tech companies to take more risk? Does the overall level of risk become uninsurable—or simply unaffordable? How would it be distributed?

One option would be embedded insurance through platforms like AWS and Azure, with premiums calculated automatically based upon usage. The risk could be repackaged into ILS-like securities, too, either on a company-by-company basis or into a diversified portfolio. There is also the question of monitoring, as AI isn’t just about the implementation of fixed, human-designed algorithms: machine learning models create and update their own decision rules as they learn. Just like insurers offering telematics to young drivers, should insurers have APIs to monitor a model’s performance—as well as accuracy and drift over time—at all times of the day…and night?
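
As a toy illustration of that telematics analogy—the function, threshold and data feed are assumptions, not any insurer’s actual API—a drift monitor might flag when a model’s recent outputs wander from the baseline the policy was priced against:

```python
import statistics


def drift_alert(baseline: list[float], recent: list[float], threshold: float = 0.1) -> bool:
    """Toy drift check: flag when the mean of a model's recent output
    scores moves more than `threshold` away from the baseline mean.
    A real monitor would use proper drift statistics (e.g., PSI or a
    KS test) and stream data continuously through an agreed API."""
    return abs(statistics.mean(recent) - statistics.mean(baseline)) > threshold


# Hypothetical daily feed from the insured's model-monitoring endpoint.
baseline_scores = [0.62, 0.58, 0.61, 0.60, 0.59]
todays_scores = [0.71, 0.74, 0.69, 0.73, 0.72]
if drift_alert(baseline_scores, todays_scores):
    print("Notify underwriter: model behaviour has drifted from the priced baseline.")
```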

We have really only just begun scratching the surface of what AI can do, but the flip side of that is that we have only just begun to understand the risk of these models and their outputs. The pace at which this space is developing is remarkable, but as usual, the insurance industry will need to be at the forefront of figuring out how to manage the risk involved.

It’s time for insurers to figure out if they want to cover these risks before they find out that they were covering them all along.