California Enacts AI Safety Disclosure Law Targeting “Frontier” Model Developers

California Gov. Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, on Monday, establishing new transparency and safety-reporting duties for companies developing the most powerful AI systems. Newsom’s office described the measure as “first‑in‑the‑nation” legislation aimed at balancing innovation with public safety.
The law applies to “frontier developers,” companies that train AI foundation models above a specified computing‑power threshold, and requires them to publish safety frameworks explaining how they identify and mitigate catastrophic risks and how they incorporate national and international standards. “Large” frontier developers, defined as those with at least $500 million in annual revenue, face additional obligations and civil penalties of up to $1 million per violation, enforceable by the state attorney general.
Under SB 53, a “frontier model” is one trained with more than 10^26 floating‑point or integer operations, a threshold intended to capture only the most capable systems. The statute defines “catastrophic risk” as a foreseeable, material risk of death or serious injury to more than 50 people, or of more than $1 billion in property damage, including risks such as expert‑level assistance in creating chemical, biological, radiological, or nuclear weapons, major cyberattacks carried out without meaningful human oversight, or loss of developer control over a model.
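For a rough sense of scale, the sketch below (illustrative only, not part of the statute) checks a hypothetical training run against the 10^26‑operation threshold using the widely cited rule of thumb that training compute is about six operations per parameter per training token; the parameter and token counts are assumptions, not figures from the law or any real system.

```python
# Illustrative back-of-envelope check against SB 53's 10^26-operation
# "frontier model" threshold. Uses the common approximation that training
# compute is roughly 6 * (parameter count) * (training tokens); the statute
# itself simply counts total floating-point or integer operations.

FRONTIER_THRESHOLD_OPS = 1e26  # statutory threshold in total training operations


def estimated_training_ops(num_parameters: float, num_tokens: float) -> float:
    """Rough estimate of total training operations via the ~6*N*D heuristic."""
    return 6.0 * num_parameters * num_tokens


def crosses_frontier_threshold(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated compute exceeds the 10^26-operation threshold."""
    return estimated_training_ops(num_parameters, num_tokens) > FRONTIER_THRESHOLD_OPS


if __name__ == "__main__":
    # Hypothetical run: a 1-trillion-parameter model trained on 20 trillion tokens
    # gives 6 * 1e12 * 2e13 = 1.2e26 operations, just above the threshold.
    print(crosses_frontier_threshold(1e12, 2e13))  # True
```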
Before releasing a new frontier model or substantially modifying an existing one, developers must post a transparency report that includes basic model information. Additionally, large frontier developers must summarize their catastrophic-risk assessments and describe any third-party evaluations. The measures take effect January 1, 2026.
The act also creates incident‑reporting rules for model failures with safety implications. Frontier developers must report “critical safety incidents” to the California Governor’s Office of Emergency Services (OES) within 15 days of discovery, and within 24 hours if an incident poses an imminent risk of death or serious injury. OES will publish anonymized annual summaries of the reports beginning in 2027.
Newsom framed the law as advancing both safety and the state’s tech economy. “This legislation strikes that balance,” he said in the signing announcement, which also highlighted new whistleblower protections and plans for “CalCompute,” a public cloud‑compute consortium to expand access to research‑grade computing.
Industry reaction was mixed. Anthropic called the framework strong and workable, while some companies and investors warned that state‑by‑state rules could create a patchwork that complicates compliance and urged Congress to set national standards.
SB 53 follows a year‑long reset after Newsom vetoed a broader AI safety bill, SB 1047, in 2024. The new law adopts recommendations from an expert working group Newsom convened to study risks from frontier models, and it focuses on public transparency rather than pre‑approval of systems. It also directs the California Department of Technology to recommend annual updates to the law’s technical definitions as the technology evolves.
Several provisions narrow the law’s scope and protect sensitive information. Reports to OES and covered employee disclosures are exempt from the Public Records Act, and companies may redact trade secrets and security‑sensitive details from public postings. The statute preempts new local ordinances that attempt to regulate frontier developers’ catastrophic‑risk management, and it will not apply where federal law preempts state action.
California officials cast the measure as part of a broader strategy to lead on responsible AI while federal legislation remains pending. Whether SB 53 becomes a national model may turn on how effectively its disclosure and incident‑reporting regime improves safety without slowing research in the state’s AI sector.
