Newsom signs bill targeting Google, Meta, OpenAI and Anthropic

A landmark law that will force major artificial intelligence companies to own up to the dangers of their cutting-edge technology was signed by Gov. Gavin Newsom on Monday, adding to California’s regulatory clout over the tech industry.
The novel law focuses on some of Silicon Valley’s largest companies and San Francisco’s buzziest startups, requiring tech giants like Meta and OpenAI to publish safety reports when they release new AI developments. It will also offer protections for whistleblowers who work at the companies, encouraging them to come forward with any information about major AI risks.
This is state Sen. Scott Wiener’s second attempt at moving the needle on safeguarding the public from potential harm from AI. The San Francisco Democrat called it an “exciting step” for responsible AI.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in his signing statement. “AI is the new frontier in innovation, and California is not only here for it — but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”
Wiener has described the law as less about liability for AI’s key developers and more about transparency. The vast majority of startups will be exempt, with the bill applying to companies that develop cutting-edge AI models and adding further requirements to those that pull in more than $500 million a year in revenue — including Meta, Google, OpenAI and Anthropic in the Bay Area. Those companies will be required to publicly share how they guard their new models against “catastrophic risk,” like helping criminals create a deadly chemical weapon or launch a billion-dollar cyberattack.
This law comes about three years after the release of ChatGPT, which launched an all-out, industrywide race to develop popular AI tools and advanced AI models. The glut of investment has poured fuel on the Bay Area’s tech industry, and put pressure on lawmakers in Sacramento.
While President Donald Trump has on numerous occasions cozied up with tech leaders this year, Republicans in Washington are framing AI regulation as a blockade to the technology’s progress. In July, an attempt by Sen. Ted Cruz to place a moratorium on state-level AI regulation was thwarted, but the Republican senator has maintained his pursuit. At a recent conference, he and two allies specifically warned against California regulating AI.
“Ah, Ted Cruz,” Newsom said during a press conference after a climate bill signing in mid-September, overtly rolling his eyes, when responding to a question from SFGATE about Republicans’ reactions in Washington, D.C. “California dominates in AI for a reason. We dominate in this space. We don’t compete in this space.”
When asked about SB 53, the governor at the time did not indicate whether he supported the bill but pointed to an executive order he signed in 2023, in partnership with the Biden administration, that aimed to “balance recklessness” that would come with the innovation.
Newsom has said that his team and the companies in question have collaborated to discuss possible solutions — but emphasized that tech did not have the final word. Anthropic is the only affected company that publicly endorsed SB 53, saying that it “provides a strong regulatory foundation.”
“We worked with industry, but we didn’t submit to industry. We’re not doing things to them, but we’re not necessarily doing things for them,” Newsom said during a panel with Bill Clinton in New York City last week.
The new law, while pushing forward transparency and safety protocols, may have its greatest impact in its protections for employees. It allows the AI developers’ workers to freely and anonymously report any potential AI risks they notice, and even requires that companies inform their employees about how to file those reports.
Whistleblowers, for years, have played a major role in shaping the public’s understanding of the tech industry. In 2021, former Facebook employee Frances Haugen disclosed thousands of pages of documents to journalists and the U.S. government, leading to a reckoning over the company’s treatment of teenagers. And just last year, a former OpenAI researcher accused his old employer of breaking copyright law, arguing for greater regulation.
The new California law arrives amid a wider reckoning about the safety of AI systems, especially as top companies throw around terms like “superintelligence” and “artificial general intelligence,” stating their goals to create better-than-human AI. A California teenager’s suicide pushed the issue of flawed chatbots into the national spotlight, as SFGATE reported, and earlier this month, federal officials asked several California companies for long lists of information about their chatbots. Wiener’s new law adds a fresh layer of transparency to AI’s leading edge.
Work at a Bay Area tech company and want to talk? Contact tech reporter Stephen Council securely at stephen.council@sfgate.com or on Signal at 628-204-5452.