Nearly three-quarters of the S&P 500 (72 percent) now report AI as a material risk in their public disclosures, according to a new study by think tank The Conference Board and data mining and analytics firm ESGAUGE.

The findings are based on Form 10-K filings from S&P 500 companies through August 15, 2025.

Up from roughly 12 percent in 2023, the increase highlights “how rapidly AI has moved from experimental pilots to business-critical systems, and how urgently boards and executives are bracing for reputational, regulatory, and operational risks,” according to the report.

Reputational risk tops the list, cited by 38 percent of companies. Failed AI projects, missteps in consumer-facing tools, or service breakdowns can quickly erode brand trust.

Cybersecurity risks follow, disclosed by 20 percent of firms.

AI technology is enlarging the attack surface companies face while also arming adversaries with more sophisticated tools, the report noted.

Disclosure Trends

Public company disclosure of AI as a material risk has surged within the past two years. Between 2023 and 2025, the share of companies reporting AI-related risks jumped from 12 percent to 72 percent.

Financials, healthcare, and industrials have seen the sharpest rise.

From 2023 to 2025, the number of companies disclosing AI-related risks jumped in financials (from 14 to 63 companies), healthcare (from 5 to 47), and industrials (from 8 to 48).

Financials and healthcare face regulatory and reputational risks tied to sensitive data and fairness, while industrials are scaling automation and robotics, the report stated.

Reputational Risks

Implementation failures, consumer-facing mistakes, and privacy breaches are the leading sources of reputational risk.

Reputational risk is the most frequently cited AI concern, disclosed by 38 percent of companies in 2025.

Implementation & adoption (45 companies): Reputational fallout may follow if AI projects fail to deliver promised outcomes, are poorly integrated, or are perceived as ineffective.

Consumer-facing AI (42 companies): Missteps—such as errors, inappropriate responses, or service breakdowns—are considered highly damaging, particularly for consumer-oriented brands.

Privacy and data protection (24 companies): Mishandling sensitive information is flagged as a reputational hazard; breaches can spark regulatory action and public backlash.

“Reputational risk is proving to be the most immediate and visible threat from AI adoption. One lapse—an unsafe output, a biased decision, or a failed rollout—can spread rapidly, driving customer backlash, investor skepticism, and regulatory scrutiny in ways that traditional failures rarely do,” said Brian Campbell, leader of The Conference Board Governance & Sustainability Center.

Cybersecurity Risks

Cybersecurity risk tied to AI was cited by 20 percent of firms in both 2024 and 2025.

Companies stated that AI both enlarges attack surfaces—through new data flows, tools, and systems—and strengthens adversaries by enabling more sophisticated, scalable attacks.

AI-amplified cyber risk (40 companies): Disclosures describe AI as a force multiplier, escalating the scale, sophistication, and unpredictability of cyberattacks.

Third-party and vendor exposure (18 companies): Disclosures highlight vulnerabilities arising from reliance on cloud providers, SaaS platforms, and external partners.

Data breaches and unauthorized access (17 companies): Breaches are a central concern, with firms emphasizing how AI-driven attacks can expose sensitive customer and business data.

Legal & Regulatory Risks

Legal and regulatory risk stands out as one of the most persistent themes in AI reporting, the study found.

“Unlike reputational or cybersecurity risks, which can manifest quickly, legal risk is framed as a longer-tail governance challenge that can lead to protracted litigation, regulatory penalties, and reputational harm,” the report stated.

Evolving regulation (41 companies): Firms cite difficulty in planning AI deployments amid fragmented and shifting rules. For example, the EU AI Act is frequently flagged for its strict requirements on high-risk systems, conformity assessments, and non-compliance penalties, the report noted.

Compliance and enforcement (12 companies): Many disclosures warn that new AI-specific rules will bring heightened compliance obligations and potential enforcement actions.

Cross-cutting legal risks (6 companies): Filings highlight uncertainty over how courts will treat IP claims tied to AI training data or who bears liability when autonomous AI systems cause harm.

Emerging Risks

Intellectual property, privacy, and adoption risks are also surfacing in company disclosures.

The risks reflect unsettled legal environments as well as strategic uncertainty around business models, customer relationships, and long-term competitiveness, the report stated.

Intellectual property (24 companies): Firms highlight risks spanning copyright disputes, trade-secret theft, and contested use of third-party data for model training.

Privacy (13 companies): Firms warn of exposure tied to sensitive data under the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and California privacy laws (CCPA/CPRA).

Technology adoption (8 companies): Several firms point to risks in execution—high costs of new platforms, uncertain scalability, and the possibility of under-delivering on promised returns.

“We’re seeing a clear theme emerging across disclosures. Companies are concerned about the impact of AI on their reputation, security, and compliance. The task for business leaders is to integrate AI into governance with the same rigor as finance and operations, while communicating clearly to maintain stakeholder confidence,” said Andrew Jones, author of the report and principal researcher at The Conference Board.