Draft Trump executive order signals new battle ahead over state AI powers

Courts likely would be forced to decide whether AI regulation is a matter of national coherence, or a domain where states retain sovereignty
As Biometric Update previously reported was likely to happen, the Trump administration has prepared a sweeping new “Deliberative Predecisional Draft” executive order (EO) that, if signed by President Trump, would dramatically reshape how the U.S. regulates AI and set off a new confrontation over the balance of state and federal power.

Indeed, the draft EO marks the opening of a new regulatory clash that may define U.S. AI policy for years to come. The leaked six-page draft executive order lays out an ambitious plan to prevent states from enacting their own AI rules and to centralize regulatory authority within federal agencies.

Though still only a draft, the document signals the clearest intent yet by the administration to dismantle state-level AI governance and to replace it with a single national framework aligned with its broader strategy to accelerate AI innovation.

The EO begins by declaring that the administration’s approach to AI leadership requires removing “barriers to American AI leadership” and eliminating what it calls a “complex and burdensome” patchwork of state-level rules.

One of the most consequential actions in Trump’s EO, if signed, would be the explicit revocation of the January 23, 2025, executive order signed by his predecessor, President Joe Biden, which was intended to strengthen U.S. AI safety and oversight.

By eliminating that federal framework, the Trump White House positions itself not simply as preempting state authority, but also as reversing its immediate federal predecessor’s regulatory approach.

The draft EO further states that the U.S. must sustain AI leadership through a “balanced, minimal regulatory environment,” language that signals a clear ideological orientation against safety-first or rights-protective models of AI governance.

The administration wants the Department of Justice to challenge state AI laws it views as obstructive; the Department of Commerce to catalogue and publicly criticize state statutes deemed “burdensome”; and agencies like the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to establish national standards that would override state requirements.

The proposed EO envisions aggressive litigation and regulatory actions to weaken or nullify state-level AI rules, including those enacted in the past year in California and Colorado.

It attacks Colorado’s algorithmic accountability framework in unusually ideological terms, complaining that it may force AI models to embed “DEI in their programming” or adjust outputs to avoid “differential treatment or impact” across demographic groups.

That language reflects a cultural-political framing that goes far beyond typical administrative legal reasoning.

The EO’s language describes recent state efforts as a “patchwork regulatory framework” that slows innovation and allows the most restrictive states to dictate national policy, asserting that American AI companies must be free to innovate “without cumbersome regulation.”

To enforce that vision, the EO lays out a hierarchical structure. A new Justice Department “AI Litigation Task Force” would consult directly with the White House’s Special Advisor for AI and Crypto, the Assistant to the President for Science and Technology, the Assistant to the President for Economic Policy, and the White House Counsel.

That governance design ensures that litigation against states is not simply a DOJ initiative but a White House–directed campaign.

The move immediately raises questions not only about the future of AI governance but also about the structure of American federalism. For years, states have been the primary actors experimenting with AI regulation. They have advanced bills aimed at biometric privacy, algorithmic fairness, deepfake disclosure, automated decision-making transparency, and even restrictions on government use of facial recognition.

These experiments, often more aggressive than anything contemplated in Congress, have become the country’s de facto laboratories of AI oversight. If the draft EO becomes policy, those laboratories could be shuttered, their influence sharply reduced, and their policies exposed to direct federal attack.

That possibility is already alarming state officials, privacy advocates, and civil liberties attorneys. At the center of their concern is the idea that state-level innovation is not simply a matter of policy preference but a fundamental check on federal inertia.

Many states have adopted biometric or algorithmic protections precisely because Congress has failed to act. They created privacy safeguards that federal agencies lacked, and they imposed accuracy and fairness standards that covered local law enforcement systems even where federal policy remained undefined.

A federal assertion of preemption threatens to freeze that process, leaving states unable to respond to harms and forcing local governments to rely on federal agencies that have historically been slow or unwilling to intervene.

The administration insists the opposite is true. Its argument is that AI must be regulated nationally to maintain American competitiveness and to avoid a maze of conflicting state rules that burden companies building and deploying AI models.

It frames state laws as impediments to innovation, claiming that differing requirements in fifty jurisdictions generate compliance chaos and undermine the nation’s ability to compete with China and Europe.

The draft EO goes further by arguing that AI systems should be free from state-imposed “alteration of truthful outputs,” language that resurrects a First Amendment theory the administration deploys throughout the document.

In that framing, state laws requiring transparency, bias testing, model disclosures, or accuracy reporting may be seen as violating constitutional protections for AI model outputs, an argument that legal scholars consider both novel and deeply contested.

But the simplicity of the administration’s argument belies its constitutional complexities. The President cannot unilaterally preempt state laws unless Congress has provided statutory authority. Courts have repeatedly held that only Congress can displace state police powers in areas like consumer protection, privacy, and public safety.

The draft order attempts to sidestep those legal barriers by instructing the Justice Department to challenge state AI laws through case-by-case litigation rather than through explicit preemption. The EO lays out a template for portraying state AI regulations as unconstitutional, preempted, or in violation of federal commerce powers.

Even conservative legal scholars argue that this approach stretches presidential authority and that the EO may have limited binding effect without court victories.

Congress complicates matters further. Earlier in the year, the Senate considered – but overwhelmingly rejected – a proposal to impose a ten-year freeze on significant state AI regulation. That moratorium failed with bipartisan resistance, demonstrating that lawmakers remain deeply protective of state authority in this domain.

The Trump administration’s draft EO attempts to revive the moratorium’s effect through executive action, but risks colliding with the same bipartisan skepticism.

Under the draft order, state AI laws could be labeled “onerous,” enabling federal agencies to target them directly. The EO empowers the FCC to initiate a federal reporting and disclosure standard meant to displace conflicting state requirements, while directing the FTC to identify state laws that allegedly require “false,” “deceptive,” or compelled AI model outputs.

This framing allows the FTC to classify such state laws as inconsistent with the FTC Act’s prohibitions on unfair or deceptive practices. States that maintain their laws could face federal litigation, regulatory pressure, or loss of federal funds.

The funding mechanism is one of the most aggressive features of the EO. Within 90 days, the Commerce Department must issue a policy notice specifying conditions under which states may remain eligible for BEAD broadband funding.

BEAD is a $42.45 billion federal program, the largest broadband investment in U.S. history, designed to expand high-speed Internet access across the country.

The draft EO weaponizes BEAD as a pressure point against states with AI laws the administration dislikes. Specifically, the EO directs the Commerce Department to determine whether states should remain eligible for BEAD funding only if they refrain from enforcing “onerous” AI laws. States with such laws must either repeal them or sign a binding agreement promising not to enforce them during any year they receive BEAD money.

This binding agreement represents a significant escalation in federal attempts to influence state policymaking.

As the draft order continues to circulate, uncertainty grows over its final form. A White House official, responding to reports of its contents, said that speculation over unreleased orders does not reflect official policy. But federal agencies are already preparing for the possibility that the directive could be signed at any moment.

Departments are assessing their authorities. State attorneys general are preparing litigation strategies. Industry groups are lobbying aggressively in favor of federal uniformity, and civil liberties groups warn that the order could block many of the strongest checks on AI-driven surveillance.

The order’s most immediate consequences would likely unfold through litigation. The Justice Department’s proposed task force would begin filing challenges to state laws within weeks. Commerce would issue reports identifying state statutes deemed obstructive. The FCC and FTC would launch rulemaking processes to establish federal AI disclosure and reporting standards that would operate as de facto national preemption.

States would respond with lawsuits arguing that the executive overstepped its authority. Courts would be forced to decide whether AI regulation is a matter of national coherence or a domain where states retain sovereignty.

The broader political dynamics are equally uncertain. Red states may align with the administration’s push to limit regulation, while blue states may frame the order as a constitutional overreach.

But even conservative states like Texas have expressed reluctance to relinquish regulatory authority to Washington. The Responsible Artificial Intelligence Governance Act, signed by Gov. Greg Abbott in June and set to take effect January 1, underscores the bipartisan desire among states to shape AI policy independently.

Trump’s EO would effectively freeze the Texas law, at least temporarily, pending inevitable legal challenges.

As seen with environmental regulation, privacy law, and public safety rules, state-level sovereignty often cuts across partisan lines. The administration could therefore find itself fighting not only blue states but also a coalition of states resistant to federal encroachment.

The unresolved question is what a centralized federal AI framework would ultimately look like. The draft order gives few specifics. It emphasizes innovation, national security, and global competitiveness, but says little about civil liberties, biometric safeguards, algorithmic discrimination, or due-process rights.

Its most concrete instruction is to require the administration’s AI, crypto, and technology advisors to jointly prepare a legislative recommendation establishing a uniform federal regulatory framework for AI – effectively a roadmap for Congressional action.
