
Artificial intelligence is reshaping insurance underwriting.

Submission intake, triage, routine data collection and pricing models for low-complexity risks are increasingly automated, while generative tools can draft policy wordings.

Executive Summary

Carriers that treat AI governance as infrastructure, not paperwork, will stay ahead of regulatory hurdles while gaining consumer trust.

Yet beneath the promise of speed and efficiency lies a thornier question: How can insurers ensure that AI-driven underwriting remains fair, transparent and accountable?

The industry’s answer will determine whether these new systems drive growth or buckle under regulatory and reputational scrutiny.

The New Rulebook

In the United States, the National Association of Insurance Commissioners (NAIC) adopted its Model Bulletin on AI Systems in December 2023. Now embraced by nearly half of U.S. states, it requires insurers to maintain board-approved governance programs, conduct internal audits and document the use of third-party AI. Though principles-based and flexible, the framework signals that regulators expect the same rigor around AI as around solvency or consumer protection.


The trend is not unique to North America. European regulators have issued guidelines stressing fairness, non-discrimination and explainability, and insurers operating in the EU, or using models that touch EU markets, must comply regardless of where the system was developed.

With regional carriers seeking to expand their footprints to break into the national tier, and global insurers increasingly subject to both U.S. and European oversight, AI governance must be treated as core infrastructure, not a discretionary add-on.

Bias at Scale

AI’s power lies in recognizing patterns across vast datasets. But left unchecked, it can also encode and amplify bias. Backed by actuarial data showing that men were statistically more likely to have accidents than women, a rules-based engine in the UK once offered cheaper auto premiums to women than men with otherwise identical profiles. This practice was later banned by the EU. But generative models raise even sharper concerns: What happens if a system infers that young men from a certain race or socio-economic background present higher risks, and prices accordingly?

Such outcomes may be statistically valid, but they are legally and ethically impermissible. Regulators are not only asking insurers to avoid discrimination; they are demanding that every pricing decision be explainable. In practice, this means underwriters must be able to articulate why two customers received different quotes, backed by transparent data lineage and human validation.

But governance is not just about satisfying examiners. Consumer trust is fragile, and accusations of algorithmic bias can spiral quickly into reputational crises. A mispriced book can be corrected; a brand tarred as discriminatory has a far harder time recovering.

Principles for AI in Underwriting

Across jurisdictions, three principles recur:

  1. Fairness and Non-Discrimination: Systems, algorithms and prompts must be tested to ensure they do not disadvantage customers by gender, ethnicity or socioeconomic status.
  2. Transparency and Explainability: Pricing logic cannot remain a black box. Audit trails, documentation and model interpretability are essential.
  3. Human Oversight: AI can accelerate workflows, but underwriters must retain authority to validate, override and justify outcomes.
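
The fairness testing the first principle calls for can be made concrete with a counterfactual check: price otherwise-identical profiles that differ only in a protected attribute and require the quotes to match. Below is a minimal Python sketch; the pricing logic, field names and `quote_premium` function are all hypothetical stand-ins, not any carrier's actual model.

```python
# Counterfactual fairness check: quotes for profiles that differ only
# in a protected attribute must be identical. All fields and the
# pricing logic below are invented for illustration.

def quote_premium(profile: dict) -> float:
    """Stand-in pricing model; a real carrier's model is far richer."""
    base = 500.0
    base += profile["claims_last_5y"] * 120.0           # claims surcharge
    base += max(0, profile["property_age"] - 20) * 3.0  # older-property loading
    return round(base, 2)

def disparity(profile: dict, attribute: str, values: list) -> float:
    """Max premium gap across values of one attribute, all else equal."""
    quotes = [quote_premium({**profile, attribute: v}) for v in values]
    return max(quotes) - min(quotes)

profile = {"claims_last_5y": 1, "property_age": 35, "gender": "F"}
gap = disparity(profile, "gender", ["F", "M"])
assert gap == 0.0  # pricing must be invariant to the protected attribute
```

In practice such a check would run over a battery of representative profiles and every protected attribute, with any nonzero gap escalated for human review before the model reaches production.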

These pillars may sound self-evident, but in practice they require new capabilities: robust data governance, monitoring tools and underwriters trained to supervise algorithms as carefully as they once reviewed submissions.

What Are the First Steps Carriers Should Take?

Carriers looking to integrate AI into underwriting cannot treat governance as a side project. It begins with enterprise-wide frameworks, approved at the board level, that spell out how models are sourced, deployed, monitored and audited. Some firms are already creating AI oversight committees that sit alongside risk and audit committees, ensuring algorithms receive the same scrutiny as financial controls. Elevating governance from “IT hygiene” to strategic infrastructure signals to regulators and clients alike that oversight is not optional.

Equally important is the treatment of external vendors. Insurers often rely on third parties for risk scoring, fraud detection or automated triage, but too many assume those “black boxes” are compliant out of the gate. Regulators will not make that distinction. Leading carriers are now demanding documentation of training data, bias testing results and performance metrics, subjecting third-party models to the same validation cycle as internal solvency models.

AI investments must also focus on explainability. A risk score has little value if no one can explain how it was reached. The ideal tools can surface the factors that contributed most to a premium increase, allowing an underwriter to tell a client: “Your rate rose primarily due to increased claims frequency in your area and property age.” This level of transparency not only satisfies regulators but also builds trust with brokers and policyholders who might otherwise view AI as arbitrary or unfair.
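
For a simple additive pricing model, the "factors that contributed most" can be read directly off per-feature contributions, as in the sketch below. The coefficients and feature names are invented for illustration; a real model would typically need a dedicated attribution method such as SHAP.

```python
# Per-factor attribution for an additive pricing model.
# Coefficients and feature names are hypothetical.

COEFFICIENTS = {
    "claims_frequency": 180.0,  # dollars per claim per year
    "property_age": 4.5,        # dollars per year of building age
    "square_footage": 0.02,     # dollars per square foot
}

def explain_quote(features: dict, top_n: int = 2) -> list:
    """Return the top_n (factor, contribution) pairs behind a quote."""
    contributions = {f: COEFFICIENTS[f] * v for f, v in features.items()}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

features = {"claims_frequency": 3, "property_age": 40, "square_footage": 2500}
top = explain_quote(features)
```

With these made-up numbers the output ranks claims frequency and property age first, which is exactly the plain-language explanation an underwriter would relay to the client.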

The role of the underwriter must evolve in parallel. Tomorrow’s professionals will need to be as comfortable interrogating model outputs as they are reviewing policy language. Some carriers are already experimenting with data literacy programs that teach underwriters to spot anomalies, such as a model disproportionately flagging risks in one geography, and escalate them before they become systemic. Others may go further, pairing junior underwriters with data scientists for “rotation weeks,” giving each side exposure to the other’s methods and mindsets.

Finally, human-in-the-loop controls remain essential. Not every case can, or should, be automated. Effective workflows allow AI to handle routine submissions while flagging outliers for human review. If 90 percent of low-complexity commercial submissions flow straight through, the remaining 10 percent should go to a specialist underwriter who can negotiate coverage terms or identify unique risk factors. This ensures efficiency without sacrificing judgment, while also giving underwriters meaningful opportunities to apply their expertise.
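
The routing rule this paragraph describes can be sketched as a simple gate. The thresholds and field names below are hypothetical; each carrier would set its own complexity criteria.

```python
def triage(submission: dict) -> str:
    """Send routine submissions straight through; flag the rest
    for a specialist underwriter. Thresholds are illustrative."""
    routine = (
        submission["total_insured_value"] <= 1_000_000
        and submission["claims_last_5y"] == 0
        and not submission["novel_risk_flags"]
    )
    return "straight_through" if routine else "underwriter_review"

assert triage({"total_insured_value": 400_000, "claims_last_5y": 0,
               "novel_risk_flags": []}) == "straight_through"
assert triage({"total_insured_value": 4_000_000, "claims_last_5y": 2,
               "novel_risk_flags": ["flood_zone"]}) == "underwriter_review"
```

The key design choice is that the gate only routes: the final decision on every flagged case stays with a human, preserving the judgment the paragraph above insists on.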

Governance Is Not a Brake on Innovation

AI promises to make underwriting faster, smarter and more efficient. Yet speed without safeguards will only invite backlash. The industry’s future will not be decided solely by model accuracy or processing times but by whether insurers can prove that their systems are fair, transparent and subject to human judgment.

Good governance does not detract from innovation; it is the condition for sustainable progress. The carriers that internalize this lesson will not just keep regulators at bay—they will earn the trust needed to thrive in an AI-driven market.