As we brainstormed content for Carrier Management’s second-quarter 2023 edition, with features about the profit-making engines of P/C insurers and features about the uses of AI, I didn’t expect a theme to emerge tying two seemingly disparate topics together.

Then the word “alignment” kept popping up—aligning values of people or machines in some chosen way to achieve desired outcomes.

The idea is especially prominent in articles about one of the industry’s most profitable carriers, RLI, which has recorded 27 straight years of underwriting profit. The secret sauce? Benefits like a performance-based ESOP, performance-based 401(k), and long-term incentive bonuses that keep associates focused on combined ratio and ROE goals in hard and soft markets—and also aligned to achieve broad initiatives tracked on a strategy scorecard, according to COO Jen Klobnak.

At American Modern, CEO Andreas Kleiner said a strategy scorecard is a tool he has used a number of times over the course of his career, most recently to lead the personal lines specialty carrier through a multiyear business transformation that included technology integration, as well as a redesign of all of the insurance products and interfaces with agents and customers. “It’s a perfect tool to track your execution, and it’s a perfect tool to communicate your strategy—and make sure that you get your whole organization aligned to your strategy,” he said, referring specifically to the Balanced Scorecard developed by Drs. Robert S. Kaplan and David P. Norton.

The transformation put American Modern’s 2022 combined ratio more than 8 points below the market, he reported. Separately, Bermuda specialty insurer and reinsurer SiriusPoint reported its first quarterly profit since mid-2021 during this year’s first quarter. A turnaround in the works has cultural alignment at its core, according to CEO Scott Egan, who noted the need to integrate predecessor companies Third Point Re and Sirius Group, which came together in 2021. “There’s more that we can do to work as one team globally, with one set of values, one approach and consistency,” he said. “[W]e are focused on creating a performance culture that rewards underwriting performance and aligns closely with shareholder value creation.”

This all made perfect sense to me when I read the articles about carrier profit-making strategies. But the term “AI alignment” in our technology articles left me with more questions than answers.

ChatGPT’s dismal performance on a math problem, its willingness to generate code to perpetuate gender and racial biases, and its potential to be duped into providing a road map for criminal activity were three examples of “misaligned AI responses” presented during a recent webinar. But what exactly is AI alignment?

It’s a complicated topic, but definitions from Wikipedia, ChatGPT and scholarly papers coalesce around something like this: “AI alignment refers to the field of research focused on ensuring that AI systems are developed and deployed in a way that aligns with human values and goals.” (Source: “From Risk to Reward: The Role of AI Alignment in Shaping a Positive Future,” su.org)

In some references, the alignment problem is illustrated with the example of an AI system designed to maximize the production of paper clips. Often attributed to Oxford philosopher Nick Bostrom, the idea is that the machine, lacking human values around the world’s resources or human life, will destroy the world in its endless quest to make paper clips, consuming even the humans who don’t share its paper clip obsession.

While most readers will agree that human life is more precious than a paper clip, do we all agree on the same human values? Admittedly, I am simplifying a complex thought experiment about superintelligent AI, but I can’t help wondering: If the goal of AI alignment is to align AI decisions with human values, who will decide what those human values are?

In an interview on Fox News in April, Elon Musk announced his plans to create TruthGPT, a competitor to OpenAI’s ChatGPT and Google’s Bard. Musk said his tool will be a “maximum truth-seeking AI that tries to understand the nature of the universe.” He added, “This might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe.” In short, TruthGPT’s values will be aligned with all humans, not with paper clip makers.

OpenAI, for its part, says its aim is “to make artificial general intelligence aligned with human values and to follow human intent.” “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all humanity.”

Setting aside the myriad questions around AGI, I’ll focus on the “all humanity” promises. Before he vowed that TruthGPT will “understand the nature of the universe,” Musk said, “I’m worried about the fact that [ChatGPT] is being trained to be politically correct, which is simply another way of saying untruthful things.”

Is Musk’s truth the truth we all believe? Is there any person or group well-suited to determine the values with which emerging AI tools should be aligned?

The concept of aligning values within a P/C insurance company for profit goals seems easy by comparison.

***

This opening note from Carrier Management’s second-quarter magazine, “P/C Insurance Profit Making,” introduces feature articles about P/C carrier profit success stories (and works in progress) that involve aligning workforces around common goals, as well as features about the use of AI in insurance.

The carrier stories include:

The AI articles include:

All of the articles in the magazine are available on the magazine page of our website.

Click the “Download Magazine” button for a free PDF of the entire magazine.

To read and share individual articles more easily, consider becoming a Carrier Management member to unlock everything.