New York State is proposing guidance for insurance carriers to prevent unfair discrimination in their use of artificial intelligence and external data sources in underwriting and pricing.

A circular letter from Superintendent of Financial Services Adrienne A. Harris outlines the responsibilities of insurers to make sure that their uses of artificial intelligence (AI) and external consumer data and information sources (ECDIS) do not violate laws against unfair discrimination. The department is seeking feedback on the proposal.

Harris said that the Department of Financial Services (DFS) recognizes that AI and ECDIS can benefit both insurers and consumers by simplifying underwriting and pricing and potentially making them more accurate. However, the regulator is concerned that the self-learning behavior of AI may increase the risks of unfair or unlawful discrimination that may “disproportionately impact vulnerable communities or otherwise undermine” the New York insurance market. DFS is also concerned that external data may vary in accuracy and come from sources that are neither regulated nor subject to consumer protections.

“Technological advances that allow for greater efficiency in underwriting and pricing should never come at the expense of consumer protection,” she said. “DFS has a responsibility to ensure that the use of AI in insurance will be conducted in a way that does not replicate or expand existing systemic biases that have historically led to unlawful or unfair discrimination.”

Insurers are currently reviewing the proposal. “We think it is important that New York regulators have issued the circular letter on a proposed basis and are encouraging feedback on the proposal. We look forward to working with DFS to ensure innovation can occur in the New York marketplace to the benefit of both consumers and insurance companies,” New York Insurance Association President Ellen Melchionni said in a statement to Insurance Journal.

The DFS is requesting feedback on its proposed guidance by March 17, 2024.

What Is Expected

The circular letter outlines DFS’s expectations for how insurers manage the integration of external data sources, artificial intelligence and other predictive models to mitigate potential harm to consumers. The DFS proposal says insurers are expected to:

  • Analyze ECDIS and AI for unfair and unlawful discrimination.
  • Demonstrate the actuarial validity of ECDIS and AI.
  • Maintain appropriate oversight of the insurer’s use of ECDIS and AI.
  • Maintain appropriate transparency, risk management and internal controls.

The letter stresses that insurers cannot rely on assurances from their third-party vendors that their uses of AI or ECDIS do not contain biases that lead to unlawful or unfair discrimination. “The responsibility to comply with anti-discrimination laws remains with the insurer at all times,” the proposal states.

Under the guidance, insurers must maintain clear lines of responsibility, comprehensive documentation, and clear consumer disclosures relating to their use of these technologies and data.


While setting forth its recommendations, DFS stated that it “recognizes there is no one-size-fits-all approach to managing data and decisioning systems.” The letter adds that insurers should take an approach that is “reasonable and appropriate to each insurer’s business model and the overall complexity and materiality of the risks inherent” in using ECDIS and AI.

In December 2023, state insurance commissioners voted to approve a model bulletin governing the use of AI in insurance. The bulletin, developed and approved by the National Association of Insurance Commissioners (NAIC), also addresses governance, documentation, third-party and risk management issues around the use of AI. In 2020, the NAIC released its Principles of Artificial Intelligence, which provide additional guidance for insurers.

Also in December, the Financial Stability Oversight Council, which comprises top financial regulators, warned in its annual report that AI could create new risks for the U.S. financial system if the technology is not properly supervised. The panel warned that while AI could spur innovation or efficiencies, the rapidly advancing technology requires vigilance from both the companies and their watchdogs.