Pennsylvania has issued guidance for insurance companies’ use of artificial intelligence systems (AIS) based on a model adopted by the National Association of Insurance Commissioners (NAIC).

The Pennsylvania Insurance Department issued Notice 2024-04, which includes recommended best practices for how insurers obtain, develop and use certain AI technologies and systems, and advises insurers on what information the department may request during an investigation or examination.

NAIC adopted its model bulletin in December 2023. Pennsylvania joins Connecticut, New Hampshire, New York, Illinois, Nevada, Rhode Island, Vermont and Alaska in adopting NAIC’s bulletin or similar guidance.

According to the Pennsylvania Insurance Department (PID) notice, the goal is not to prescribe specific practices or documentation requirements but to ensure that insurers are aware of the department’s expectations regarding AI programs and use.

AI technology is used to replace or supplement human intelligence in gathering information, analyzing data, running models, and making decisions. According to the NAIC and other analysts, AI may affect the insurance industry in multiple ways. AI is increasingly being deployed in marketing, customer service, underwriting, claims, fraud investigations and other areas. One example is the use of chatbots in customer service; another is the use of sensors, data and images to measure property damages and predict repair costs.

‘Speeding Train’

“Technology is always evolving and is a great tool to help streamline processes. That said, PID always aims to make certain that insurers are informed on pertinent considerations for using technological advances in a manner that is fair to consumers and is in compliance with current law,” said Pennsylvania Insurance Commissioner Michael Humphreys. “AI is no exception. This notice provides insurers with the guidance to help ensure accurate and fair outcomes for Pennsylvanians when using AI.”

AI and the Future of Insurance

A 2021 article published by the consulting firm McKinsey — Insurance 2030: The impact of AI on the future of insurance — describes a scenario in 2030 where a man’s personal assistant orders him a self-driving vehicle but he decides to drive himself. His vehicle advises him of the safest route. When he pulls into the parking area, he hits a sign. He takes pictures of the front bumper damage. The screen on the dash confirms his claim has been approved, and a drone is on the way to inspect whether the vehicle is drivable.

The McKinsey authors note that the technologies in this scenario already exist and that emerging AI technologies have the potential to mimic the human mind. They write that AI also has the potential to revolutionize insurance:

“In this evolution, insurance will shift from its current state of ‘detect and repair’ to ‘predict and prevent,’ transforming every aspect of the industry in the process. The pace of change will also accelerate as brokers, consumers, financial intermediaries, insurers, and suppliers become more adept at using advanced technologies to enhance decision making and productivity, lower costs, and optimize the customer experience,” the authors comment.

The “predict and prevent” capability enabled by AI has been embraced by other industry leaders.

Peter L. Miller, president and CEO of The Institutes, writing in Carrier Management concludes that predicting and preventing catastrophic events, as well as the day-to-day risks, is critical to the economic sustainability of insurers. In addition, he suggests that if there are ways to prevent the devastation of major events, then members of the risk management and insurance community “have an ethical and moral responsibility” to do so.

“It’s an effort to tap the brakes of this speeding train down the tracks, to tell carriers that we already have laws on the books that address so many of these issues,” is how attorney Bruce Baty described the NAIC’s regulatory guidance in a recent article — Regulators Run Alongside Speeding AI Train: What the NAIC Model Bulletin Means for Insurers — published by Carrier Management.

In the end, what the state insurance commissioners’ committee created is a “subtle reminder that we have market conduct laws on the books and that we fully intend to utilize those market conduct tools to come in and investigate how are you using AI so that you are mitigating the risk to consumers with respect to unfair discrimination,” Baty added.

Setting Expectations

Indeed, the guidance reminds insurers that decisions supported by AI must comply with all applicable insurance laws and regulations. These include statutes and regulations on unfair insurance trade practices and claims settlements, as well as those requiring insurers to report annually on their corporate governance structure, policies and practices.

The department also expects that decisions backed by AI must comply with laws requiring property/casualty insurance and workers’ compensation rates not be excessive, inadequate or unfairly discriminatory.

“The requirements of these acts apply regardless of the methodology that the insurer used to develop rates, rating rules and rating plans subject to those provisions. That means that an insurer is responsible for assuring that rates, rating rules and rating plans that are developed using AI techniques and predictive models that rely on data and machine learning do not result in excessive, inadequate or unfairly discriminatory insurance rates with respect to all forms of casualty insurance—including fidelity, surety and guaranty bond—and to all forms of property insurance—including fire, marine and inland marine insurance, and any combination of any of the foregoing,” the notice states.

The notice makes clear that an insurer’s AI conduct is “subject to investigation, examination and market analysis” by state regulators.

Also, all insurers that use AI systems are expected to maintain a written program for the responsible use of AI. The AI program should be designed to “mitigate the risk of adverse consumer outcomes, including, at a minimum, to maintain compliance with the statutory and regulatory provisions.”

Best Practices

The department provides some guidance on best practices for an insurer’s use of AI, which the department says are not intended to be binding upon insurers, nor restrictive of the department’s oversight.

Some of the guidelines suggest that an AI program should:

  • be designed to mitigate the risk that the insurer’s use of an AI system will result in adverse consumer outcomes.
  • address governance, risk management controls and internal audit functions.
  • vest responsibility for the development, implementation, monitoring and oversight of the AIS program and for setting the insurer’s strategy for AI systems with senior management accountable to the board or an appropriate committee of the board.
  • identify and address the use of all AI systems across the insurance life cycle, including areas such as product development and design, marketing, use, underwriting, rating and pricing, case management, claim administration and payment, and fraud detection.
  • include processes and procedures providing notice to impacted consumers that AI systems are in use and provide access to appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are being used.

An insurer’s AI program may be independent of or part of the insurer’s existing enterprise risk management program. The AI program may utilize a framework or standards developed by an official third-party standard organization. Insurers should require any third party to cooperate with the insurer with regard to regulatory inquiries and investigations.

Governance Framework

The department also wants insurers to have a governance framework for the oversight of AI systems they use. This should include policies, processes and procedures, including risk management and internal controls, that are to be followed at each stage of an AI system life cycle, from proposed development to retirement. The framework should also address the scope of responsibility and authority, chains of command and decisional hierarchies.

With respect to predictive models, an insurer should include a description of methods used to detect and address errors, performance issues, outliers or unfair discrimination in the insurance practices resulting from the use of the predictive model.

Big Data and Compliance

In NAIC’s commentary on the use of AI in insurance, the organization notes that insurers have a “treasure-trove of big data, the main ingredient AI requires to be successful.” This data can be leveraged through AI to increase customer engagement, create more personalized service and marketing, and match customers with appropriate products. Looking ahead, NAIC says, AI will “enable insurers to move from a ‘detect and repair’ framework to a ‘predict and prevent’ framework, allowing insurers to help their customers manage their risks and avoid claims altogether.”

Conning & Co. recently surveyed insurance executives on their use of AI and found that nearly two-thirds indicated they are using large language models in sales and underwriting, and that AI could soon become the most widely adopted technology in operations. Conning analysts said the overwhelming amount of diverse data being collected by insurers now hinders their profitability, and AI is seen as a solution to this problem.

The Conning study cited three challenges insurers will have to overcome, one being regulatory compliance. However, it concluded that as long as insurers stay abreast of applicable regulations and data privacy issues, the industry is on the verge of a “great transformation.”