A month and a half after the NAIC adopted a model bulletin on carriers’ use of Artificial Intelligence Systems, the New York Department of Financial Services issued a circular letter on the “Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing.”

The document, like the NAIC bulletin, sets forth expectations for governance, risk management and internal controls related to the use of AI Systems (AIS) and external consumer data and information sources (ECDIS). Beyond that, the New York circular, which focuses squarely on the two P/C insurance operations of pricing and underwriting, sets out “Fairness Principles” regarding the “actuarial validity” of data and details the department’s expectations that insurers will perform both quantitative testing and qualitative assessments for unfair or unlawful discrimination in the data or models.

The “Fairness Principles” section includes the following guidance (emphasis added):

  • “An insurer should not use ECDIS or AIS for underwriting or pricing purposes unless the insurer can establish that the data source or model…does not use and is not based in any way on any class protected” under New York law.
  • Analysis of actuarial validity of data “should demonstrate a clear, empirical, statistically significant, rational, and not unfairly discriminatory relationship between variables used and the relevant risk to the insured.”
  • Insurers must be able to demonstrate that ECDIS are not prohibited by Insurance Law.
  • Insurers should be able to demonstrate that ECDIS “do not serve as a proxy for any protected classes…”
  • “An insurer may not rely solely on a vendor’s claim of non-discrimination or a proprietary third-party process to determine compliance with anti-discrimination laws.”
  • “An insurer should not use ECDIS or AIS in underwriting or pricing unless the insurer can establish through a comprehensive assessment that the underwriting or pricing guidelines are not unfairly or unlawfully discriminatory in violation of the Insurance Law.”
  • At a minimum, a comprehensive assessment includes “assessing whether the use of ECDIS or AIS produces disproportionate adverse effects…on similarly situated insureds, or insureds of a protected class”; assessing whether there is any “legitimate, lawful, and fair explanation or rationale” for any disproportionate effects; searching—and documenting a search for—less discriminatory alternatives and modifying AIS and ECDIS accordingly.
  • The circular lists five statistical metrics that insurers can consider for fairness assessments; a sketch of one such disparity calculation appears after this list.
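
As a rough illustration of the kind of quantitative testing described above, the following Python sketch computes an adverse impact ratio: the rate of favorable underwriting outcomes in each group relative to a reference group, where a ratio well below 1.0 flags a potential disproportionate adverse effect. The data, column names and groups are hypothetical, and the adverse impact ratio is offered only as one common disparity measure, not necessarily one of the five metrics the circular cites.

```python
import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame, group_col: str,
                         favorable_col: str, reference_group: str) -> pd.Series:
    """Ratio of each group's favorable-outcome rate to the reference group's rate."""
    rates = decisions.groupby(group_col)[favorable_col].mean()
    return rates / rates[reference_group]

# Hypothetical underwriting decisions: 1 = offered coverage on standard terms.
decisions = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "favorable": [1,   1,   0,   1,   1,   0,   0,   1],
})

# A result well below 1.0 for any group would prompt the follow-up steps the
# circular describes: looking for a legitimate, lawful rationale and for less
# discriminatory alternatives.
print(adverse_impact_ratio(decisions, "group", "favorable", reference_group="A"))
```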

Importantly, the circular states that in addition to quantitative analysis, “insurers’ comprehensive assessment should include a qualitative assessment of unfair or unlawful discrimination. This includes being able to explain, at all times, how the insurer’s AIS operates and to articulate the intuitive logical relationship between ECDIS and other model variables with an insured or potential insured individual’s risk.”

***

Related video:

Hear what policyholder lawyers Carolyn Rosenberg and Anthony Crawford are saying about the New York AI circular in the recent podcast interview with CM Deputy Editor Elizabeth Blosfield, “Regulating the Future with AI and Insurance,” starting 5 minutes into the interview (excerpt included below).