In recent years, the debate around artificial intelligence has shifted from speculation about future capabilities to urgent conversations about real-world impact.
Executive Summary
“The future of AI may not be apocalyptic, but it will be litigated,” according to Moody’s executives. While AI has not yet delivered a leap in capability equivalent to its hype, it is already generating litigation trends that demand close attention from underwriters, claims specialists and risk managers alike. In fact, if, as one court ruled, chatbots are legally considered products, not services, then AI developers could find themselves facing the same kinds of litigation that reshaped the tobacco, pharmaceutical and social media industries, they write.
On one end of the spectrum, technologists have warned of existential risks associated with rapidly advancing AI agents. On the other, critics have urged caution against overhyping a technology that, while powerful, is not immune to the regulatory, economic and ethical constraints that shape all innovation.
For insurers focused on casualty exposure, this evolution has brought AI squarely into the domain of “normal technology.” That is, a class of innovation that, like chemicals, pharmaceuticals and digital platforms before it, diffuses gradually through the economy, with risks and responsibilities emerging along the way. While AI has not yet delivered a leap in capability equivalent to its hype, it is already generating litigation trends that demand close attention from underwriters, claims specialists and risk managers alike.
A New Frontier in Product Liability
In October 2024, a federal lawsuit was filed in the Middle District of Florida by Megan Garcia, whose son died by suicide after interacting with an AI chatbot. The case, Garcia v. Character Technologies, Inc., names both the chatbot developer and Google, which provided the large language model (LLM) infrastructure. It is the first lawsuit to allege bodily injury caused by an AI chatbot, and the first in which a court has allowed such a claim to proceed past the motion-to-dismiss stage.
What makes the Garcia case particularly important is the court’s treatment of the chatbot as a product rather than a service, and its openness to applying traditional product liability doctrine to this emerging technology. The judge found that the plaintiff plausibly alleged design defects, including the absence of adequate safety mechanisms such as age verification or crisis-reporting tools. Because the alleged harm arose from these design choices and not just from the chatbot’s expressive content, the court determined that the product liability claims were plausible.
This distinction is significant. If chatbots are legally considered products, not services, and if their design elements are found to materially contribute to user harm, AI developers could find themselves facing the same kinds of litigation that reshaped the tobacco, pharmaceutical and social media industries.
First Amendment and the Boundaries of Speech
Defendants in the Garcia case also invoked the First Amendment, arguing that chatbot interactions were protected speech. This is a common defense raised in litigation involving user-generated content on social media platforms, where courts have often upheld protections for platforms acting as publishers rather than creators.
But in this case, the court declined to apply First Amendment immunity, noting that the lawsuit was not based on the expressive content of chatbot messages but on their allegedly defective design. This echoes similar decisions in social media litigation, where courts have allowed claims to proceed when they focus on features like algorithmic recommendation systems or addictive user interfaces, rather than on speech alone.
Extending the Liability Supply Chain
Perhaps most notably, the Garcia court did not dismiss the claims against Google, even though Google did not develop the chatbot directly. The judge found that Google, as the provider of the LLM powering the chatbot, could plausibly be held liable as a “component part manufacturer.” This creates a potentially expansive theory of liability, one that reaches upstream to include infrastructure providers whose models are embedded in third-party AI products.
This marks a meaningful departure from past litigation involving digital platforms. In social media cases, liability has typically rested with the platform provider. But AI, by its nature, is modular and collaborative. If courts continue to recognize component liability in AI applications, it could significantly widen the net of exposure for technology companies and, by extension, for casualty insurers.
Section 230: Still Relevant, but Not Absolute
For nearly three decades, Section 230 of the Communications Decency Act has provided broad legal immunity to platforms that host user-generated content. While this provision remains relevant in cases involving traditional social media, its applicability to AI-generated content is more uncertain.
Notably, defendants in the Garcia case did not assert a Section 230 defense, likely because the AI chatbot’s responses were not posted by a third party but generated by the underlying model. However, two recent court decisions shed light on how this defense might be evaluated in future AI-related claims.
In Patterson v. Meta Platforms Inc., a New York appeals court ruled that Section 230 protected social media platforms from liability related to user-posted content, even when algorithms were used to amplify that content. The court reasoned that algorithmic recommendation was a form of editorial decision-making, which falls squarely within the protected domain of a publisher.
By contrast, in State of North Carolina v. TikTok Inc., a North Carolina court denied Section 230 immunity in a case alleging that TikTok had designed its platform to be intentionally addictive to children. Because the claim focused on product design rather than content moderation, the court held that the platform’s conduct went beyond traditional publishing activity and was therefore not shielded by Section 230.
The takeaway for insurers is clear: The applicability of Section 230 depends heavily on whether the alleged harm stems from user content or from the product’s design. As AI systems generate more of their own content, and as plaintiffs focus increasingly on design choices, Section 230 is likely to offer diminishing protection in this space.
What the Legal Trend Means for Insurers
The Garcia case is just one among a growing number of lawsuits targeting AI developers and infrastructure providers. Since the decision to allow Garcia’s claims to proceed, additional cases have been filed, including one against OpenAI for harms allegedly caused by ChatGPT, and several more against Character Technologies. The Texas attorney general has also launched an investigation into chatbots targeting minors with mental health advice.
At the same time, AI is moving further into consumer-facing products. Startups are already producing AI-powered companions and toys marketed to children. In an eyebrow-raising development, Mattel recently announced a partnership with OpenAI to co-develop new experiences for children. This raises the stakes for casualty insurers, who must now consider not only the design and deployment of AI systems, but also their potential to interact with vulnerable populations in high-liability environments.
To be clear, the technology underlying AI chatbots has yet to trigger a systemic loss event. But the legal foundation is being laid for AI litigation that could resemble past mass torts in structure and perhaps even in scale. For casualty insurers, the prudent path forward involves treating AI like any other emerging risk: one that develops incrementally, becomes clearer with time, and can be priced accordingly with the right tools and models.
AI as ‘Normal Technology’
In their 2024 book AI Snake Oil, Princeton researchers Arvind Narayanan and Sayash Kapoor caution against viewing AI as a superhuman force poised to upend society overnight. Instead, they frame it as “normal technology” that is capable of transformation but bound by the slow churn of adoption, regulation and unintended consequence.
This perspective is both comforting and instructive for insurers. It allows the industry to rely on familiar mechanisms like scientific validation, regulatory signals and judicial precedent to assess and respond to AI-related risks. It encourages investment in forward-looking analytics that incorporate early signals from litigation, science and policy. And it reinforces the role of tort law as a safeguard against externalities that technology companies may overlook in their pursuit of innovation.
As AI continues to advance, the task for insurers is not just to follow the litigation but to anticipate it. That means engaging deeply with how these technologies are being built, marketed and used. It means investing in exposure-based models that can map legal liability at the entity level, before it crystallizes into a claim. And it means recognizing that “normal technology” is anything but simple when the stakes involve human safety, legal precedent and reputational risk.
The future of AI may not be apocalyptic, but it will be litigated. The insurers best prepared for that future will be those who start underwriting it today.