California Pushes Forward with Scaled-Back AI Safety Bill After Previous Setback
California State Senator Scott Wiener is making another attempt to regulate artificial intelligence safety, introducing a more measured approach following the controversial defeat of his previous legislation. The new bill, SB 53, strips away the most contentious elements of last year’s failed SB 1047 while preserving key transparency requirements that could reshape how major AI companies operate.
The updated legislation focuses on three primary areas: mandating safety reports from the world’s largest AI developers, establishing robust whistleblower protections for employees at AI laboratories, and creating a public computing cluster to democratize access to AI research resources.
Transparency Requirements Take Center Stage
Under the amended SB 53, companies like OpenAI, Google, Anthropic, and xAI would be required to publish detailed safety and security protocols for their most advanced AI systems. The bill also mandates that these companies issue public reports whenever safety incidents occur during development or deployment.
This represents a significant departure from the current voluntary approach, under which companies publish safety documentation inconsistently. Google recently drew criticism for not releasing a safety report for its Gemini 2.5 Pro model until months after launch, while OpenAI opted against publishing one for its GPT-4.1 model at all.
“Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take,” said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode.
The transparency requirements emerged directly from recommendations by California’s AI policy group, which Governor Gavin Newsom assembled after vetoing SB 1047. The group, led by Stanford researcher Fei-Fei Li, emphasized the need for “requirements on industry to publish information about their systems” to establish what they called a “robust and transparent evidence environment.”
Whistleblower Protections Address Internal Concerns
SB 53 includes comprehensive protections for employees who believe their company’s AI systems pose what the bill defines as “critical risk” to society. The bill sets that threshold at technology that could foreseeably cause death or serious injury to more than 100 people, or more than $1 billion in damages.
The legislation would prevent companies from retaliating against employees who report concerning information to California’s Attorney General, federal authorities, or colleagues. Companies would also be obligated to respond when employees flag internal practices they find troubling.
These protections reflect growing concerns within the AI industry about rapid development timelines and potential safety shortcuts. The provisions aim to create formal channels for employees to voice concerns without fear of professional repercussions.
CalCompute: Democratizing AI Research
The bill establishes CalCompute, a public cloud computing cluster designed to provide researchers and startups with access to the substantial computational resources needed for AI development. A working group comprising University of California representatives and other public and private researchers would oversee the cluster’s development and determine access policies.
This initiative addresses a fundamental barrier in AI research: the enormous computational costs that often limit innovation to well-funded corporations. By providing public access to high-performance computing resources, CalCompute could enable smaller research groups and startups to compete more effectively with tech giants.
A Strategic Retreat from Previous Battles
Unlike its predecessor, SB 53 deliberately avoids making AI companies liable for potential harms caused by their systems—a provision that sparked fierce opposition from Silicon Valley in 2024. The current bill also includes specific exemptions for startups and researchers who fine-tune existing models or work with open-source AI systems.
This more targeted approach reflects lessons learned from SB 1047’s defeat. That legislation faced intense industry opposition, with critics arguing it would damage America’s competitive position in global AI development. The debate grew particularly heated when some venture capitalists claimed the bill would send startup founders to prison, an assertion experts called misleading.
Uncertain Political Landscape
The bill’s prospects remain unclear in the current political environment. While California passed 18 AI-related bills in 2024, the political momentum behind AI safety legislation appears to have waned. Vice President J.D. Vance recently signaled at the Paris AI Action Summit that the administration prioritizes AI innovation over safety regulations.
However, the bill has garnered support from unexpected quarters. Geoff Ralston, former president of Y Combinator—an organization that previously opposed SB 1047—endorsed the new legislation as “a thoughtful, well-structured example of state leadership.”
Similar legislation is emerging in other states, with New York Governor Kathy Hochul considering the RAISE Act, which would impose comparable transparency requirements on AI developers.
As SB 53 moves through California’s legislative process, beginning with the State Assembly Committee on Privacy and Consumer Protection, it represents a test case for whether more moderate approaches to AI regulation can succeed where comprehensive frameworks have failed. The outcome will likely influence how other states approach AI safety legislation and whether the industry can find common ground with regulators on basic transparency.
Source: TechCrunch