Hallucinations, bias, privacy infringements and intellectual property violations are some of the risks of generative AI that technology and insurance professionals have surfaced over the last year.

But a recent white paper from Munich Re puts environmental risks on the radar screen too.

And a management liability broker executive highlighted some D&O risks related to AI last month.

Munich Re’s white paper, “Insuring Generative AI: Risks and Mitigation Strategies,” published late last week, cites all the typical risks listed in the first paragraph of this article, along with technology and insurance solutions to mitigate them. Adding a final category of risks to consider, “Other Risks Including Environmental Risks,” the white paper highlights two of the less frequently discussed risks of generative AI models: the increased energy and water consumption associated with training and retraining AI models.

“Gen AI models have billions of parameters which need to be trained to improve the models’ performance. As the number of model parameters increases, the energy consumption for one training round rises correspondingly, generating high carbon emissions for a majority of the models,” the white paper explains. The Munich Re paper goes on to cite research dating back to 2019, which revealed that the energy consumed in training a so-called transformer model using GPUs (graphics processing units) generates more than 626,000 pounds of carbon dioxide, almost five times the lifetime emissions of a car in the United States.
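
For readers who want to check the “almost five times” comparison, the back-of-envelope arithmetic is below. This is a minimal sketch, assuming the roughly 126,000-pound average lifetime car emissions figure (fuel included) that the Strubell et al. paper used as its U.S. benchmark.

```python
# Back-of-envelope check of the "almost five times" comparison.
# Both figures are as reported in Strubell et al. (2019); the car
# benchmark is that paper's U.S. average lifetime figure, fuel included.
training_emissions_lbs = 626_155  # transformer trained with neural architecture search
car_lifetime_lbs = 126_000        # average U.S. car over its lifetime

print(f"{training_emissions_lbs / car_lifetime_lbs:.1f}x")  # ~5.0x
```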

The original research paper referenced in the Munich Re white paper, “Energy and Policy Considerations for Deep Learning in NLP” (by Emma Strubell, Ananya Ganesh and Andrew McCallum, 2019), notes that NLP (natural language processing) models “could be trained and developed on a commodity laptop or server” a decade before the paper was published. But such models “now require multiple instances of specialized hardware such as GPUs or TPUs” (tensor processing units). “Even when these expensive computational resources are available, model training…incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time,” the 2019 paper said.

The Munich Re white paper, compiling information from risk research reports and describing possible mitigations, also adds water consumption to the list of potential risks of Gen AI, citing a 2023 research report titled “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models” (by Pengfei Li, Jianyi Yang, Mohammad A. Islam and Shaolei Ren).

“Training could ‘cost’ 700,000 liters of clean freshwater, equal to the daily drinking water needs of 175,000 people,” the Munich Re white paper says, summarizing a key finding.

The original research paper explains that data centers use an enormous amount of water for both on-site cooling (scope 1) and off-site electricity generation (scope 2). The researchers equate the 700,000 liters referenced by Munich Re to the amount of clean freshwater consumed to train GPT-3 in Microsoft’s state-of-the-art U.S. data centers, and also note that “global AI demand may be accountable for 4.2-6.6 billion cubic meters of water withdrawal in 2027.” According to the report, this is more than half the annual water withdrawal for the United Kingdom.

Translating AI use of water into human terms, the researchers note that “GPT-3 needs to ‘drink’ (i.e., consume) a 500ml bottle of water for roughly 10-50 responses, depending on when and where it is deployed.”
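
The per-person and per-response figures follow from simple division. Here is the arithmetic, a quick sketch using only the numbers quoted above:

```python
# Arithmetic behind the water figures from Li et al. (2023) quoted above.
training_water_liters = 700_000
people_served = 175_000
print(training_water_liters / people_served, "liters per person per day")  # 4.0

# A 500 ml bottle per 10-50 responses implies 10-50 ml per response.
for responses in (10, 50):
    print(f"{500 / responses:.0f} ml per response")  # 50 ml and 10 ml
```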

In this and other risk areas, Munich Re suggests that insurers have a role to play. “Insurance providers could play a role in ensuring responsible model development. By establishing guidelines for balancing training frequency and energy and water consumption, insurance companies could aid in ensuring the right balance between continuous improvement of the Gen AI models and an environmentally sustainable development.”

AI Washing and Other D&O Risks

Separately last month, during a webinar about the Top D&O Stories for 2023, D&O expert Kevin LaCroix put AI risks and environmental risks together in another novel way—one that could give rise to D&O losses.

LaCroix, an attorney and executive vice president at RT ProExec, as well as author of The D&O Diary blog, predicted that within a year, D&O insurers will be dealing with the first real examples of corporate and securities litigation related to disclosures about AI use and board oversight of AI programs. Explaining a possible litigation scenario during the webinar, LaCroix said: “AI is a hot topic. A lot of companies are going to tout their use of AI. And just as companies trying to show their [environmental] sustainability credentials got ahead of themselves and were accused of greenwashing, they may fall into ‘AI-washing.’”

LaCroix borrowed the “AI-washing” terminology from SEC Chair Gary Gensler, who drew an analogy to greenwashing in a speech last December warning that companies should not mislead investors by exaggerating their AI capabilities. (LaCroix discussed Gensler’s remarks in detail in a Dec. 6 blog item.)

While executives may want their companies to seem “trendy, progressive and ready to greet the new age” with statements about AI successes, some may be “getting ahead of themselves because AI is not that transformative or it’s not doing much to change their operations,” LaCroix said. Adding to Gensler’s warnings, LaCroix sees another disclosure problem related to AI: rather than overstating their AI capabilities, companies may understate the risk that they will be disrupted by AI, or fail to disclose that competitors are adopting AI tools more rapidly.

LaCroix connected more D&O risks to AI, noting that the growing body of AI regulation raises the possibility that companies will fail to comply, and that board duties of oversight now extend to AI use. He imagined, as an example, a company using images or content generated by AI that later faces allegations it used such content deceptively. In addition to consumer claims and privacy claims, such a company could face a D&O action alleging that its “board was not sufficiently monitoring” and overseeing a critical corporate function, LaCroix said.

Are Gen AI Risks Insurable?

Lawsuits alleging privacy and IP violations have already emerged. In the white paper published last week, Munich Re underwriters and research scientists said that privacy and IP risks, and other more obvious risks of using Gen AI (hallucinations, bias, harmful conduct), could be insurable. They also note the presence of “silent AI” coverage in traditional insurance policies that don’t exclude AI risks.

At Munich Re, specific insurance risk transfer solutions for Gen AI tools are in the works, an extension of the work the global reinsurer has done for six years with aiSure, its AI performance guarantees for machine learning models. The white paper first reviews examples of aiSure products under which AI providers and creators of home-grown AI tools transfer the risks that machine learning models fail, perform below contracted levels or fuel discrimination lawsuits.

“Munich Re considers certain risks associated with Gen AI models to be insurable under defined conditions (such as continuous performance monitoring and clear metrics),” the reinsurer stated in an email introducing the new white paper.

In a media statement, Michael Berger, head of the Insure AI team at Munich Re, said, “Our performance-based insurance products cover financial losses arising from generative AI models, for example, if the models are hallucinating. We are continuously working with customers on expanding coverage to other challenging risks of generative AI applications, including copyright infringement or discriminating output.”

The white paper highlights some of the differences between machine learning models and Gen AI models that make the design of insurance products trickier but not impossible.

“Compared to traditional machine learning models, measuring the performance of Gen AI models is less straightforward. Reasons are the differences in training (leading to more complex outcomes), the variety of tasks Gen AI models are confronted with, differences in setup (most Gen AI models are based on foundation models), and finally the subjectivity of the quality of these outcomes (judging the ground truth),” the report says. This means “the underperformance of a Gen AI application must be defined well in order to capture the essential differences,” the white paper states.
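
That contrast can be made concrete in a few lines of code. The hypothetical sketch below (the function names and the toy rubric are invented for illustration, not drawn from the white paper) shows why a classifier’s performance reduces to one objective number while a Gen AI application needs a task-specific judge:

```python
from typing import Callable

def classifier_accuracy(preds: list[int], labels: list[int]) -> float:
    """Supervised ML: performance is an objective match against ground truth."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def genai_task_score(outputs: list[str], judge: Callable[[str], float]) -> float:
    """Gen AI: 'underperformance' must be defined per task, via a judge,
    because output quality is subjective and there is no single ground truth."""
    return sum(judge(o) for o in outputs) / len(outputs)

# Example: a toy rubric that rewards answers citing a source.
print(genai_task_score(["See ISO 2024 filing.", "No idea."],
                       judge=lambda o: 1.0 if "ISO" in o else 0.0))  # 0.5
```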

Addressing individual risks in turn, the white paper groups hallucinations, false information and harmful content risks together in one section to describe some obstacles common to insuring Gen AI against these risks, along with solutions. Unlike traditional machine learning models, which are trained in a supervised fashion and produce predicted values for given input data, Gen AI models are trained in an unsupervised or semi-supervised fashion, and they also respond creatively. In addition, a single Gen AI model used for a wide scope of use cases may perform some tasks well and others miserably, and output may degrade over time (as foundation models are updated or data sets are fine-tuned).

The combination of problems will force underwriters “to amend processes to include continuous monitoring and regularly updating insurance policies,” the white paper says, also recommending, among other fixes, that model performance evaluations be tied to a single specific task. For Gen AI application providers that use APIs (application programming interfaces) of foundation models (like GPT-4), degradation risks could be addressed by pausing guarantees or policies when degradation in a foundation model is expected or detected, or by adjusting the guarantee threshold or policy fee as risks change, the white paper says.
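
A hypothetical sketch of that monitoring pattern follows; the names (Policy, monitor) and the repricing rule are invented for illustration, since the white paper describes the approach only in prose. Coverage is tied to one task, re-scored on a schedule, paused when a score falls below the contracted threshold, and repriced as the margin changes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    task: str          # the single, specific task the guarantee covers
    threshold: float   # contracted minimum performance on that task
    base_fee: float
    active: bool = True

def monitor(policy: Policy, latest_score: float) -> Optional[float]:
    """Re-evaluate coverage after each scheduled benchmark run.

    Returns the adjusted fee, or None while the guarantee is paused."""
    if latest_score < policy.threshold:
        policy.active = False  # pause while the foundation model is degraded
        return None
    policy.active = True
    # Illustrative repricing rule: the fee shrinks as the performance
    # margin over the contracted threshold grows.
    return policy.base_fee * (policy.threshold / latest_score)

# Example: a summarization guarantee scored 0.82 against a 0.75 threshold.
policy = Policy(task="claims-letter summarization", threshold=0.75, base_fee=1_000.0)
print(monitor(policy, latest_score=0.82))  # ~914.6; None if the score dips below 0.75
```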

The Munich Re white paper similarly lists some of the problems that AI insurance underwriters are working to solve in developing policies to insure against model bias, as well as IP and privacy violations (such as broad legal definitions of discrimination or IP rights). In two sections of the white paper, the authors mention a technology fix for privacy risks—”differential privacy” (“adding noise to individual data records before input is fed into [a] model,” making it difficult to recover raw data from model output).
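
The technique can be illustrated with the standard Laplace mechanism, one common way of adding noise to individual records; the white paper does not specify which mechanism is meant, so the sketch below is just one plausible instance:

```python
import numpy as np

def perturb_records(records: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
    """Add Laplace noise to each record before it is fed into a model.

    `sensitivity` bounds how much one individual's data can shift a value;
    smaller `epsilon` means more noise and stronger privacy, at the cost
    of making the perturbed inputs less useful for training."""
    scale = sensitivity / epsilon
    return records + np.random.laplace(loc=0.0, scale=scale, size=records.shape)

# Example: salaries noised so raw values are hard to recover from model output.
salaries = np.array([52_000.0, 61_500.0, 48_200.0])
print(perturb_records(salaries, sensitivity=1_000.0, epsilon=0.5))
```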

“Over time, as the uncertainty in law is lifted, Munich Re aims to extend the insurance from a performance-based insurance solution to a full liability insurance solution,” the white paper says in the section about IP and privacy risks.

The energy and water consumption risks remain outside of the aiSure scope of coverage for now. “As environmental impacts and other risks are still being explored, including their meaning for society, Munich Re is currently not insuring these risks. However, as these risks evolve and the wider Gen AI risk landscape becomes clearer, Munich Re will continue to co-develop risk transfer solutions with its clients,” the white paper says.

Commenting on traditional policies, the white paper notes AI risks are seldom excluded in traditional insurance policies. That means that AI-based machinery injuring bystanders could be covered by existing general liability policies, hacked AI models could be covered by existing cyber insurance, AI robots that destroy property could be covered under property insurance policies, and AI models making biased employment decisions could be covered under employment practices liability insurance policies.

Still, potential gaps in coverage for insureds and underpriced policies for insurers make these less than ideal, the white paper suggests, predicting that “broad demand” for insurance specifically covering Gen AI “will arise.”