Senator Cruz faces obstacles in bid to preempt state AI standards

Texas Senator Ted Cruz’s bid to set federal rules for AI to head off what he calls a chaotic patchwork of state laws is colliding with a burst of state activity led now by California.
The stakes are high: the goal of Cruz and the White House is to rein in the states, and letting California take the lead in setting standards would be consequential for the federal government. The Golden State is the nation's undisputed high-tech hub, home to the country's largest technology economy and the biggest share of tech GDP.
Only two other states trail California with broadly impactful AI legislation. New York's Responsible AI Safety and Education Act, which covers frontier AI risks and overlaps with California's transparency regime, passed the New York Assembly in June. Michigan's AI Transparency Act, which likewise focuses on industry transparency obligations, is pending before that state's House Committee on Regulatory Reform.
After unveiling his five-pillar “Legislative Framework to Strengthen American AI Leadership” plan on September 10 and introducing his Sandbox Act, Cruz also vowed to revive the ten-year moratorium on state and local AI laws that his Republican colleagues stripped from the GOP’s “One Big Beautiful Bill” this summer, an effort that has exposed splits within Trump’s MAGA base in Congress.
The bill is pending action by the Senate Committee on Commerce, Science, and Transportation, which Cruz chairs.
The bill currently has no co-sponsors, but it has garnered support from Big Tech trade groups such as the Abundance Institute, the U.S. Chamber of Commerce, and the Information Technology Industry Council.
At Politico’s AI & Tech Summit Tuesday, Cruz said the moratorium is “not at all dead,” underscoring his argument that fragmented state standards threaten U.S. AI competitiveness and invite compliance nightmares for developers.
After the dramatic collapse of the proposed federal moratorium on state AI regulation in July, California lawmakers saw a narrow window of opportunity to shape the framework that could guide the nation’s AI laws and pressed ahead with their own regulatory agenda.
In the final days of session, California lawmakers sent Gov. Gavin Newsom two signature bills, SB 53, a transparency-and-safety bill for “frontier” AI developers, and SB 243, a first-in-the-nation framework to curb risks from AI companion chatbots, particularly for minors.
SB 53 would require large model makers to publish risk frameworks and transparency reports and to notify state officials of critical safety incidents, an approach that consciously backs off prescriptive “kill switch” mandates Newsom rejected when he vetoed last year’s broader AI safety bill.
AI developer Anthropic was the first Big Tech company to endorse the bill. “With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,” Anthropic said in a statement.
“Frontier AI companies have made many voluntary commitments for safety, often without following through. This legislation takes a small but important first step toward making AI safer by making many of these voluntary commitments mandatory,” said Dan Hendrycks, executive director of the Center for AI Safety.
Industry trade groups like the Consumer Technology Association and the Chamber of Progress are highly critical of the bill. “SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks,” the chamber said.
Newsom has until October 12, the statutory deadline for action on end-of-session legislation, to sign or veto the bill; he has not publicly commented on it. Business counsel have already flagged the bill’s narrower scope, incident-reporting timelines, and whistleblower protections as features likely to pass gubernatorial scrutiny.
Authored by state Sen. Steve Padilla, SB 243 would require operators of AI companion chatbots to implement suicide-prevention protocols, restrict sexual content for minors, and disclose that the bots are not human, paired with a private right of action for families if developers don’t comply. Padilla cited suicide cases in California and Florida to justify the bill.
“This technology can be a powerful educational and research tool, but left to their own devices, the tech industry is incentivized to capture young people’s attention … at the expense of their real-world relationships,” Padilla said on the Senate floor shortly before passage by the California legislature.
Padilla said his “first-of-its-kind” legislation “would require chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with artificial intelligence chatbots and provide families with a private right to pursue legal actions against noncompliant and negligent developers.”
SB 243 is supported by online safety advocacy groups and received bipartisan support throughout its journey through the state legislature.
The White House, however, is openly wary of letting Sacramento set the de facto national standard. At the Politico summit, Elon Musk pal Sriram Krishnan, Senior White House Policy Advisor on AI, bluntly stated that “we don’t want California to set the rules for AI across the country,” framing state moves as a brake on Washington’s plan to accelerate development in a race with China.
One of the authors of Trump’s AI Action Plan released in July, Krishnan favors minimal regulatory interference in private sector AI innovation and has made statements like, “let them cook,” to describe how the government should approach AI R&D and private-sector progress.
This posture aligns with Cruz’s push for federal primacy over AI and his warning that a state mosaic will slow innovation and raise barriers for smaller businesses. Cruz is a White House ally, so it’s no surprise that the Trump administration’s mantra for regulating Big Tech is to let Big Tech innovate with as little government intrusion as possible.
For Cruz, the Sandbox Act is the immediate test case of his feds-versus-states game plan. The bill would let AI firms apply for “two-year waivers renewable up to a decade” from specific federal rules if they disclose and mitigate safety and consumer risks, an approach described as “federal flexibility with guardrails” to avoid a “fragmented and failing” state-by-state environment.
Cruz’s broader tactical problem is that the state activity he wants to preempt includes his own backyard. In June, Texas Gov. Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), making Texas the second state after Colorado to enact an omnibus AI law.
TRAIGA prohibits practices ranging from government social scoring to certain manipulative or discriminatory AI uses, updates the state’s biometric consent rules, creates a regulatory sandbox, and forms a Texas AI Council. It takes effect January 1, 2026, with Attorney General enforcement and civil penalties of up to $200,000 for incurable violations.
Sponsor Rep. Giovanni Capriglione defended the law as both pro-innovation and risk-aware. “It has been a delicate balancing act in making sure that we continue to have innovation [while] at the same time there are significant potential risks to using artificial intelligence. What this bill does is it puts those together,” he said.
Texas Republican Rep. Brian Harrison warned that “hyper-regulatory” approaches could cost the U.S. its edge against China.
California Democrats, for their part, argue that federal inaction has left states to fill the vacuum. Both Newsom and Padilla have defended the state’s “pioneering role” in setting responsible AI guardrails while maintaining tech leadership.
The state’s strategy this session has been to tighten the scope and focus on transparency, incident reporting, and youth protections, rather than sweeping engineering mandates.
Critics say some proposals were watered down, while business lobbies warn of friction with national deployments.
Inside Sacramento, Padilla’s office is the loudest voice in the youth-safety lane. Citing tragic cases in Florida and California where teens died by suicide after interactions with chatbots, Padilla has recruited parents and safety advocates to testify and is pushing to pair design-duty concepts with concrete escalation protocols to crisis services.
His press office highlighted broad bipartisan votes, but a coalition of tech-safety groups that had urged stronger language remains split, with some advocates claiming last-minute changes weakened the bill. The California attorney general has publicly backed the overall thrust of the state’s chatbot safety push.
Beyond chatbots, lawmakers also passed SB 53 targeting “frontier” developers. The bill narrows obligations to the largest firms – those above a revenue and compute threshold – and emphasizes standardized public disclosures, 15-day incident reporting (24 hours for imminent harm), whistleblower protections, and annual anonymized summaries starting in 2027.
Supporters, including Sen. Scott Wiener, cast the bill as “trust-but-verify,” a compromise they argue is likelier to clear Newsom’s desk than last year’s heavier mandates. Whether that proves true will shape how far other states go this fall.
This federal-state standoff now defines Cruz’s sales pitch. On one hand are California’s new transparency regime and youth-safety guardrails and Texas’s TRAIGA – all evidence, he argues, of the very patchwork Washington must prevent.
On the other hand, there are the lawmakers like Padilla who insist that state regulations are indispensable while Congress drags its feet.
Cruz insists he’s coordinating with the White House and says Senate rules staff have cleared his moratorium language for future legislative vehicles, but he’s also banking on the Sandbox Act to show that federal oversight can be flexible without becoming a vacuum.
Now, with AI laws already on some states’ books, the patchwork is no longer hypothetical. It’s this landscape Cruz must navigate.