
California’s newly signed AI law just gave Big Tech exactly what it wanted

On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, requiring AI companies to disclose their safety practices while stopping short of mandating actual safety testing. The law requires companies with annual revenues of at least $500 million to publish safety protocols on their websites and report incidents to state authorities, but it lacks the stronger enforcement teeth of the bill Newsom vetoed last year after tech companies lobbied heavily against it.
The legislation, SB 53, replaces Senator Scott Wiener’s previous attempt at AI regulation, known as SB 1047, which would have required safety testing and “kill switches” for AI systems. Instead, the new law asks companies to describe how they incorporate “national standards, international standards, and industry-consensus best practices” into their AI development, without specifying what those standards are or requiring independent verification.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement, though the law’s actual protective measures remain largely voluntary beyond basic reporting requirements.
According to the California state government, the state houses 32 of the world’s top 50 AI companies, and more than half of global venture capital funding for AI and machine learning startups went to Bay Area companies last year. So while the recently signed bill is state-level legislation, what happens in California AI regulation will have a much wider impact, both by legislative precedent and by affecting companies that craft AI systems used around the world.
Transparency instead of testing
Where the vetoed SB 1047 would have mandated safety testing and kill switches for AI systems, the new law focuses on disclosure. Companies must report what the state calls “potential critical safety incidents” to California’s Office of Emergency Services and provide whistleblower protections for employees who raise safety concerns. The law defines catastrophic risk narrowly as incidents potentially causing 50+ deaths or $1 billion in damage through weapons assistance, autonomous criminal acts, or loss of control. The attorney general can levy civil penalties of up to $1 million per violation for noncompliance with these reporting requirements.
The shift from mandatory safety testing to voluntary disclosure follows a year of intense lobbying. According to The New York Times, Meta and venture capital firm Andreessen Horowitz have pledged up to $200 million to two separate super PACs supporting politicians friendly to the AI industry, while companies have pushed for federal legislation that would preempt state AI rules.
The original SB 1047 had been drafted by AI safety advocates who warned about existential threats from AI, drawing heavily on hypothetical scenarios and science fiction tropes, but it met pushback from AI firms that found the requirements too vague and the potential reporting burdens too onerous. The new law follows recommendations from AI experts convened by Newsom, including Stanford’s Fei-Fei Li and former California Supreme Court Justice Mariano-Florentino Cuéllar.
As with SB 1047, the new law creates CalCompute, a consortium within the Government Operations Agency to develop a public computing cluster framework. The California Department of Technology will recommend annual updates to the law, though such recommendations require no legislative action.
Senator Wiener described the law as establishing “commonsense guardrails,” and Anthropic co-founder Jack Clark called the law’s safeguards “practical.” Still, the transparency requirements likely mirror practices already standard at major AI companies, and disclosure rules without enforcement mechanisms or specific standards may offer limited protection against potential AI harms in the long run.