US lawmaker advances new AI regulation bill amid battles over oversight

Texas Republican Senator Ted Cruz on Wednesday rolled out the Sandbox Act, a bid to give AI developers temporary regulatory breathing room while federal agencies evaluate real-world risks. It’s the lawmaker’s latest move to shape national AI policy after championing this spring’s deepfake takedown law and waging a summer fight over state authority to regulate AI.
Cruz said the bill “outlines five pillars to guide Congressional efforts on AI policy and proposes a light-touch regulatory strategy to make safe AI deployment easier in the United States while protecting against emerging risks.”
The bill would establish a federal “regulatory sandbox” allowing AI companies and users to apply for two-year waivers or modifications from certain federal rules if they submit plans to mitigate health, safety, and consumer harms.
Cruz, who chairs the Senate Committee on Commerce, Science, and Transportation, framed the proposal as pro-innovation and pro-competition, particularly against China, while insisting it is not a license to break the law.
“How policymakers approach the issue of regulating artificial intelligence is one of the most important questions of our time,” Cruz said. “AI is transformative. It has the potential to raise Americans’ standard of living, simplify tasks, end mindless paperwork, empower the disabled to live more independently, while also enhancing existing businesses, and creating new ones. Like the Internet, AI can extend the reach of American values around the world.”
“But make no mistake,” Cruz emphasized, “America is in an AI race against China.”
Cruz said his legislation is the first piece of a broader “AI policy framework” that he’s proposing, noting that the sandbox is a mechanism recommended by the White House’s July AI Action Plan and that any waiver must come with written accountability commitments. He also stressed that federal agencies retain oversight while participants test products in defined, risk-managed conditions.
Cruz outlined the proposal as a framework built around five priorities. He said it is designed to “unleash American innovation and long-term growth” by cutting red tape on AI infrastructure and expanding space for entrepreneurs. It also seeks to safeguard free speech in the AI era, including resisting efforts by foreign governments to censor Americans or shape domestic discourse.
Another central aim is preventing a confusing web of state-level regulations that Cruz argued would stifle development. The plan further calls for stronger protections against malicious uses of AI such as scams and fraud schemes that increasingly target seniors, and a renewed emphasis on bioethics to ensure technology advances do not come at the expense of human dignity.
“This list is not exhaustive,” Cruz said, “but it provides a foundation for debate with colleagues and the administration on how to guarantee the United States leads in AI and reaps its benefits.”
Cruz’s proposal has garnered support from notable organizations in the tech space, such as the Abundance Institute, the U.S. Chamber of Commerce, and the Information Technology Industry Council.
It also quickly drew fire from consumer advocates. Public Citizen warned that “by creating a federal regulatory sandbox, the proposal hands Big Tech the keys to experiment on the public while weakening oversight, undermining regulatory authority, and pressuring Congress to permanently roll back essential safeguards.”
The public advocacy group added that the legislation would “empower companies to apply for waivers from any federal regulation that touches on an AI product or service” and “automatically grant waivers from federal regulations if an agency does not act within 90 days for up to two years, with extensions possible for up to ten years.”
It would also “grant the Director of the Office of Science and Technology Policy (OSTP) the unprecedented power to override any agency’s rejection of a waiver application and approve the waiver,” the group said, and would create “a fast track that allows OSTP to recommend Congress repeal or amend existing regulations based on sandbox activity.”
In a statement announcing his new bill, Cruz said, “to be clear, a regulatory sandbox is not a free pass. People creating or using AI still have to follow the same laws as everyone else. Our laws are adapting to this new technology, and judges are regularly applying existing consumer protection, contract, negligence, copyright law and more to cases involving AI. Conduct that is illegal without AI will remain illegal with AI.”
Cruz’s turn toward a permissive, experimentation-first posture comes after a bruising, high-profile clash over whether Washington should curb state AI rulemaking.
In June, he helped lead an effort to tie a years-long moratorium on state AI enforcement to the GOP’s marquee tax-and-spending package, recasting the House’s 10-year ban into a Senate version that threatened states’ access to certain federal broadband or AI funds if they enacted their own AI rules.
The gambit drew bipartisan resistance from governors and senators alike and was ultimately stripped out in a 99–1 vote in early July, leaving the state patchwork intact for now and underscoring the political limits of preempting local oversight.
But even as that preemption push faltered, Cruz banked a rare early federal win on AI misuse. His Take It Down Act, co-sponsored with Democratic Sen. Amy Klobuchar, sailed through Congress in the spring and was signed by President Trump on May 19.
The law criminalizes the knowing publication of non-consensual intimate images, including AI-generated deepfakes, requires platforms to remove flagged content within 48 hours, and tasks the Federal Trade Commission with enforcement.
The measure passed the House 409–2 after clearing the Senate by unanimous consent, reflecting broad agreement that deepfake pornography has outpaced state and platform responses. Civil-liberties groups nonetheless cautioned that aggressive takedown mandates could chill lawful speech or pressure services to scan encrypted communications.
Taken together, Cruz’s efforts sketch a doctrine for governing AI: police the most visible harms quickly; resist overlapping layers of regulation that, in Cruz’s view, could slow U.S. competitiveness; and lean on time-bounded, data-gathering trials to decide which rules are truly necessary.
Cruz is casting his sandbox as a way to “give entrepreneurs room to breathe, build, and compete” within safety guardrails while industry groups argue a national framework would avoid a maze of conflicting rules. Critics counter that waivers risk becoming de facto deregulation if deadlines and OSTP override powers tilt decisions toward approval.
Key details will determine whether the Sandbox Act is a limited test bed or a broad regulatory bypass. According to bill summaries, applicants would need to identify specific rules impeding development, propose concrete mitigation measures, and operate under written agreements for up to two years, with oversight retained by the federal government.
Opponents highlight language they say could extend relief for much longer and fear that centralizing final say at OSTP could sideline expert agencies. Supporters, pointing to the administration’s own action plan, describe the approach as a structured way to generate evidence on risk and benefit before locking in permanent rules, especially in fast-moving domains like frontier models, synthetic media detection, or autonomous systems.
The politics of all this remain fluid. Cruz, newly ascendant at the commerce committee, has positioned himself as a fulcrum between a White House eager to accelerate AI-era infrastructure and an emboldened coalition of state lawmakers pressing ahead with their own safeguards.
After the Senate nixed the moratorium, multiple states resumed work on licensing and watermarking requirements, while federal committees shifted attention to liability, privacy, and national-security questions around model deployment.
Whether the Sandbox Act gains traction could hinge on how it addresses three unresolved issues: the boundary of waiverable rules in consumer-protection statutes; transparency to outside researchers and affected communities during sandbox trials; and the balance of power between OSTP, expert agencies, and Congress in green-lighting or vetoing experiments.
Early reaction, split along familiar lines, suggests another closely fought debate.
Cruz’s office portrays the measure as complementary to the deepfake law, punishing clear abuses while letting lower-risk innovation advance under watch.
For now, the Sandbox Act gives Washington a new test of whether AI oversight can be both fast and careful. Its fate will reveal how much latitude Congress is willing to grant industry to “move fast” inside government-built guardrails, and how much trust lawmakers place in OSTP to referee disputes across the federal alphabet soup, especially after the Senate forcefully reasserted state prerogatives just weeks ago.
“It’s unconscionable to risk the American public’s safety to enrich AI companies that are already collectively worth trillions,” Public Citizen said. “The sob stories of AI companies being ‘held back’ by regulation are simply not true and the record company valuations show it.”
“Lawmakers should stand with the public, not corporate lobbyists, and slam the brakes on this reckless proposal,” the group added, noting that “Congress should focus on legislation that delivers real accountability, transparency, and consumer protection in the age of AI.”