As insurers introduce artificial intelligence into pricing and claims handling activities, regulatory focus on disparate impact will grow into “one of the biggest topics of the next 10 years,” a well-known insurance analyst predicted at an industry conference.
V.J. Dowling, managing partner of Dowling & Partners Securities LLC, made the forecast during a session of the 2020 Joint Industry Forum last Thursday titled, “Insurance Vision: Seeing Beyond 2020,” responding to a question from moderator David Sampson, president and chief executive officer of the American Property Casualty Insurance Association.
While U.S. regulators talk about the importance of innovation and enabling technology on the one hand, efforts to restrict “even existing actuarial factors that can be used in underwriting” continue, Sampson said, asking Dowling to comment on what will play out as carriers add data analytics to pricing toolkits.
“What’s fascinating today is how the whole AI comes into play,” said Dowling. Offering a personal example to illustrate the new twist in the disparate impact battle, Dowling imagined image recognition software capturing his facial characteristics as part of a pricing algorithm.
“You don’t even know what it’s doing. But in the end what happens is you get the infamous disparate impact—that certain protected groups end up paying more.”
Photos of Dowling taken by Don Pollard
“And this is just beginning. It’s not just on underwriting. It’s on claims…What is going to happen when one particular group disproportionally gets sent to the special investigation unit?” he asked.
Related article: NIST Study Finds Racial Bias in Face Scanning Tech
Reviewing some history, Dowling said that 30 years ago, personal lines insurers might have put insureds into broad risk buckets to set a price for each. “Technology and data has allowed the number of buckets to increase until arguably, you get to a point where each individual person has their own price based on their specific characteristics. And what that means is, you get a much bigger dispersion of rates from high to low. The subsidization starts going away.”
“Then, on top of that, AI goes in and looks at data and comes up with a price. It’s not saying because you lived here. It’s doing stuff, you don’t even know what it’s doing. But in the end what happens is, you get the infamous disparate impact—that certain protected groups end up paying more.”
Dowling recommended that executives in the audience read a recent blog item written by Daniel Schreiber, the CEO of renters insurer Lemonade, which described the same history of pricing—from large buckets to tiny ones possible through AI. Unlike Dowling, however, Schreiber argues that “algorithms we can’t understand can make insurance fairer” in the blog post titled, “AI Can Vanquish Bias” (also available on LinkedIn).
Following the logic of the article, Schreiber would argue that with AI, each Irishman would not be treated as a stereotypical drinking Irishman. Instead, an AI algorithm would identify an individual’s proclivity to drink and “charge them more for the risk that this penchant actually represents.” In the article, Schreiber actually uses his own religious background to make the point, observing that Jewish people engage in the practice of Shabbat candle-lighting every Friday to usher in the Sabbath and “burn through about two hundred candles over the eight nights of Hanukkah.”
Writes Schreiber: “The fact that such a fondness for candles is unevenly distributed in the population, and more highly concentrated among Jews, means that, on average, Jews will pay more. It does not mean that people are charged more for being Jewish.”
“In common with Dr. Martin Luther King, we dream of living in a world where we are judged by the content of our character. We want to be assessed as individuals.”
Schreiber’s blog post goes on to tell regulators how to recognize unfair pricing, proposing a “uniform loss ratio test” for pricing outcomes. According to Schreiber, a pricing system “is fair—by law—if each of us is paying in direct proportion to the risk we represent.” Regulators can tell whether this is the case since loss ratios will be constant across the customer base when an insurance company charges all customers a rate proportionate to the risks they pose. “We’d expect to see fluctuations among individuals, sure, but once we aggregate people into sizable groupings—say by gender, ethnicity or religion—the law of large numbers should kick in, and we should see a consistent loss ratio across such cohorts. If that’s the case, that would suggest that even if certain groups—on average—are paying more, these higher rates are fair, because they represent commensurately higher claim payouts,” he suggests.
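The mechanics of the test Schreiber describes are simple enough to sketch in a few lines of code. The snippet below is an illustration of the general idea, not Schreiber’s or Lemonade’s actual implementation; the cohort labels and all figures are synthetic and hypothetical.

```python
# Illustrative sketch of a "uniform loss ratio test": if every customer
# pays a premium proportional to the risk they represent, the loss ratio
# (claims paid / premiums collected) should come out roughly the same
# within any sizable cohort. All data below is synthetic.

def loss_ratio(policies):
    """Aggregate loss ratio for a list of (premium, claims_paid) pairs."""
    premiums = sum(p for p, _ in policies)
    claims = sum(c for _, c in policies)
    return claims / premiums

# Two hypothetical cohorts: cohort B pays higher premiums on average,
# but its claims are proportionately higher too.
cohort_a = [(100.0, 60.0), (120.0, 80.0), (80.0, 50.0)]
cohort_b = [(200.0, 130.0), (150.0, 95.0), (250.0, 155.0)]

ratio_a = loss_ratio(cohort_a)  # 190 / 300 ≈ 0.633
ratio_b = loss_ratio(cohort_b)  # 380 / 600 ≈ 0.633

# Under the test, near-equal cohort loss ratios suggest rates track risk;
# a persistent gap would flag potential disparate impact for review.
print(f"cohort A: {ratio_a:.3f}, cohort B: {ratio_b:.3f}")
```

In this toy portfolio the higher-paying cohort also generates proportionately higher claims, so the ratios match—the outcome Schreiber argues regulators should look for.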
While Schreiber promotes the “uniform loss ratio test” as being “simple, objective and easily administered,” back at the Joint Industry Forum, Dowling suggested that easy answers aren’t coming in the near term. “What’s going to happen when all of a sudden the underwriting and the claims start coming up with disparate impact? Watch this. It’s going to be one of the biggest topics the next ten years,” he said.
“We are only beginning to see what is going to happen.” While West Coast technology companies “think they can use this technology, we’re about to have a huge battle” that has already started in New York, he said, referring to a letter that the New York Department of Financial Services wrote to life insurers last year. “If you haven’t read it, you should. It basically says, you can do this, but if it has a disparate impact on the end result, you can’t do it. To me, there was a double negative. You effectively can’t do it,” he concluded.
The actual language of the letter Dowling seemed to be referencing, Insurance Circular Letter No. 1 (2019), issued on Jan. 18, 2019, begins as APCIA’s Sampson suggested, stating that the N.Y. department “fully supports innovation and the use of technology to improve access to financial services.
“Indeed, insurers’ use of external data sources has the potential to benefit insurers and consumers alike by simplifying and expediting life insurance sales and underwriting processes,” the letter says. “External data sources also have the potential to result in more accurate underwriting and pricing of life insurance….”
But continuing on to warn against the use of data or algorithms that introduce disparate impact, the circular letter says that “an insurer should not use an external data source, algorithm or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class.”
“An insurer may not simply rely on a vendor’s claim of non-discrimination or the proprietary nature of a third-party process as a justification for a failure to independently determine compliance with anti-discrimination laws. The burden remains with the insurer at all times.”
Lawyers commenting on the letter note that such rules are more typically applied to homeowners insurers.
The circular letter goes on to state that an insurer should not use any of those innovations “unless the insurer can establish that the underwriting or rating guidelines are not unfairly discriminatory.” The letter also outlines rules about transparency in explanations of pricing results from insurers using predictive models and external data to customers. “The reason or reasons provided to the insured or potential insured must include details about all information upon which the insurer based any declination, limitation, rate differential or other adverse underwriting decision, including the specific source of the information upon which the insurer based its adverse underwriting decision,” and insurers can’t hide behind the “proprietary nature of a third-party vendor’s algorithmic processes to justify the lack of specificity.”
On the topic of international data transparency, Sampson asked a second panelist, Hayley Spink, head of global operations at Lloyd’s, to describe the headaches that insurers—and all companies—faced to come into compliance with the General Data Protection Regulation in 2018. Spink explained that GDPR is a regulatory and legal framework that covers how companies handle, collect and process personal data, levying monetary fines for companies that are not in compliance. “Especially in our industry, we deal with personal data all the time and we share that personal data between ourselves and third parties”—MGAs, coverholders, brokers, Lloyd’s, insurers and regulators (for reporting purposes). “So this has had a big big impact across the EU,” she said. GDPR affords individuals protection over how people are using personal data, but “it starts to create a bit of a tension between how we [in the insurance industry] make best use of our customer data to ensure we’re getting them products [they] need, [while] making sure we’re using that appropriately as well.”
Other highlights of the session included Spink and Dowling debating the effectiveness of innovation labs and the value of InsurTechs.
Spink also outlined some game-changing uses of technology, including the use of drones to assess damage after catastrophes and the use of parametric insurance to trigger payments of flood claims.
Dowling discussed industry innovators beyond technologists—powerful insurance distributors and third-party capital providers. At one point, he predicted that technology will enable distributors to package up blocks of premiums and sell them off to third-party capital.