California passes first AI safety law in the United States

California Governor Gavin Newsom signed the “Transparency in Frontier Artificial Intelligence Act” (SB 53) at the end of September 2025. The law requires developers of particularly powerful AI models to disclose their safety measures and to report incidents. Its goal is to monitor the risks of so-called “Frontier AI” systems: AI models whose computing power and capabilities go far beyond those of previous applications.
The law is considered a significant step in the U.S. debate on AI regulation. While the European Union has already adopted a comprehensive regulatory framework through its AI Act, there is still no federal regulation in the United States. California, home to many leading technology companies, aims to take a pioneering role with SB 53, albeit in a more limited form than earlier proposals.
Compromise after a controversial predecessor
The new legal framework is a revised version of an older, much stricter bill. The predecessor SB 1047 was passed by the California legislature in 2024 but vetoed by Governor Newsom. He argued that the proposed safety audits and liability rules could slow innovation and overburden start-ups.
After the veto, Governor Newsom convened a working group of representatives from academia, politics, and industry. Its mandate was to find a middle ground between the freedom to innovate and public safety. The result was the newly adopted SB 53, which is scheduled to take effect on January 1, 2026.
The new bill focuses on transparency and accountability but avoids direct interference in the development process. Companies are not required to disclose which training data or algorithms they use. Instead, they must document how they identify and manage risks.
Documentation requirements instead of technical oversight
The law applies exclusively to large AI developers operating so-called “Frontier Models.” These are foundation models whose training requires more than 10²⁶ computational operations, a scale currently reached by only a few companies. The stricter obligations apply only to “Large Frontier Developers,” i.e., firms with an annual revenue exceeding 500 million U.S. dollars, including affiliated companies.
Developers of these models must regularly submit safety reports to California’s regulatory authorities. These reports must specify which testing and protection mechanisms were applied during development and training. Any safety-related incidents must also be reported within 15 days.
The reported data remains confidential and is not made public, a provision intended to protect trade secrets. The focus is on regulatory traceability: authorities should be able to verify whether companies can identify, at an early stage, threats such as cyber misuse, disinformation, or uncontrolled self-improvement of their models.
Protection against “catastrophic risks”
Legislators introduced SB 53 in response to concerns that particularly powerful AI systems could one day act autonomously or be used for dangerous purposes. Scenarios under discussion include automated cyberattacks, the misuse of biotechnology, and coordinated disinformation campaigns.
The law aims to create an early warning system to detect such risks before they cause harm. It does not address ethical or data protection issues but focuses on systemic and security-related threats. In this respect, SB 53 differs significantly from previous U.S. AI debates, which have often been economically driven.
Parallels and differences with the EU AI Act
With its adoption, California moves closer to European-style regulation. The EU AI Act, adopted in May 2024, governs the use of artificial intelligence across the European Union and is being phased in, with most obligations applying from August 2026.
Both laws pursue the same goal: greater transparency and accountability in the use of AI. In both cases, developers must demonstrate that they can identify and mitigate risks. Both the EU AI Act and SB 53 treat large foundation models as a separate risk category.
However, the approaches differ considerably. The EU regulation covers all AI applications, from facial recognition to chatbots, mandates conformity assessments, and imposes fines of up to seven percent of global annual turnover for violations. California’s SB 53, by contrast, focuses exclusively on high-performance models and relies on self-regulation under supervision; specific penalty levels have yet to be determined.
California’s approach is therefore more selective and business-friendly but also less comprehensive.
Criticism from both sides
The law has received mixed reactions. Representatives of large technology companies praise the pragmatic approach but warn of additional bureaucracy and unclear implementation details. The requirement to provide safety and risk analyses, they say, will necessitate new internal structures for many firms.
At the same time, critics from academia and civil society point out that SB 53 applies only to very large companies. Its obligations apply only when a developer exceeds 500 million U.S. dollars in annual revenue and trains a model requiring more than 10²⁶ computational operations. As a result, only a handful of corporations, such as OpenAI, Google DeepMind, Anthropic, and Meta, are covered.
Smaller providers and open-source projects remain unregulated. Some experts view this as a loophole: risky models could also emerge outside the major labs without being subject to legal obligations. Others argue that this narrow definition is necessary to avoid stifling innovation among medium-sized firms.
The absence of independent audits is also criticized. Authorities must rely on company-provided reports; external audits are not required. Critics fear that this could limit the actual effectiveness of oversight.
Conclusion
With the “Transparency in Frontier Artificial Intelligence Act,” California is setting a precedent in international AI regulation. The law represents a compromise between economic freedom and public safety: more limited than its predecessor, yet forward-looking. It obliges major AI developers to provide transparency without interfering with their technology, marking the beginning of a new phase in the global race for safe artificial intelligence.