Senate Republicans press national AI framework to preempt states

Proposal would limit state regulation of AI development while preserving some state consumer protection powers, setting up a fight over who governs AI in the U.S.

Sen. Marsha Blackburn on Friday released a sweeping draft federal AI framework that would override at least some state AI laws, sharpening a Senate Republican push for national preemption just as states continue advancing their own rules on deepfakes, algorithmic discrimination, and other AI harms.

The proposal, branded the Trump America AI Act, is designed to replace at least part of the growing patchwork of state AI laws with a national standard that Republicans say would promote innovation while minimizing risk.

Also on Friday, House Democrats introduced the Guaranteeing and Upholding Americans’ Right to Decide Responsible AI Laws and Standards Act (GUARDRAILS), which would “ensure states can enact commonsense safeguards to protect the American public in the face of rapidly evolving AI technologies.”

Blackburn has framed the Senate Republican discussion draft as an effort to codify President Donald Trump’s December executive order calling for a single national AI framework.

In announcing the draft framework, Blackburn said Congress must create “one federal rulebook for AI” to address what she described as a fragmented state-by-state regulatory landscape that is hindering innovation.

Her office says the bill is built around protecting what it calls the “4 Cs” — children, creators, conservatives and communities — while ensuring the United States wins the global AI race.

The proposal is broader than a simple preemption measure. Blackburn’s draft bill would place a duty of care on AI developers, require covered platforms to adopt safeguards for minors, establish requirements for AI chatbot and companion services aimed at protecting children, and sunset Section 230 of the Communications Decency Act.

That last provision would be one of the most consequential changes in the package. Section 230 generally shields online platforms from being treated as the publisher or speaker of user-generated content, while also allowing them to moderate content in good faith without automatically losing that protection.

In practice, phasing out those protections could expose platforms to significantly more litigation over user posts and, depending on how the legislation is written, over AI-generated outputs as well.

Blackburn’s plan would also create new avenues for enforcement by allowing the U.S. attorney general, state attorneys general and private plaintiffs to sue AI system developers over certain harms tied to defective design, failure to warn, and other product-liability-style theories.

On copyright, the measure takes a notably aggressive position, stating that the unauthorized reproduction, copying or processing of copyrighted works for training or fine-tuning AI models should not qualify as fair use.

It would also expand protections against unauthorized digital replicas and direct the National Institute of Standards and Technology to develop standards for provenance, watermarking and synthetic content detection.

The draft reaches well beyond child safety and copyright. Blackburn’s office says it would require third-party audits to prevent discrimination based on political affiliation and would codify Trump’s executive order against what Republicans describe as “woke AI” in federal procurement.

On economic and infrastructure issues, it would require quarterly reports on AI-related layoffs and job displacement, direct the Department of Energy (DOE) to negotiate with data center operators to protect consumers from electricity rate increases, and create an Advanced Artificial Intelligence Evaluation Program inside DOE to study frontier AI risks, including loss-of-control scenarios and possible weaponization by adversaries.

The White House’s own legislative recommendations, also released on Friday, reinforce the broader administration goal of national uniformity, but they also show that the federal approach is more nuanced than simple blanket preemption.

The administration’s National Policy Framework says Congress should establish a federal AI policy framework that avoids “a fragmented patchwork of state regulations” and should preempt state AI laws that impose “undue burdens” in favor of a minimally burdensome national standard.

At the same time, the White House says Congress should preserve state authority to enforce laws of general applicability, including laws to protect children, prevent fraud and protect consumers, and should not interfere with state zoning authority over AI infrastructure or with state decisions about their own use of AI in areas such as law enforcement and public education.

That White House framework is especially important because it lays out the administration’s theory of where state authority should end. It says states should not be permitted to regulate AI development because it is “an inherently interstate phenomenon” with foreign policy and national security implications.

It also says states should not unduly burden lawful uses of AI or penalize AI developers for unlawful conduct by third parties involving their models.

But the White House stops short of endorsing across-the-board preemption. It explicitly says Congress should not preempt states from enforcing generally applicable child-protection laws, including those addressing AI-generated child sexual abuse material.

The White House framework also adds details that are relevant to Blackburn’s broader political argument.

On children, it recommends commercially reasonable, privacy-protective age-assurance requirements, such as parental attestation, for AI platforms and services likely to be accessed by minors, and says Congress should require those services to reduce risks of sexual exploitation and self-harm.

On communities and infrastructure, it says Congress should protect residential ratepayers from increased electricity costs tied to new AI data center construction while also streamlining federal permitting so developers can accelerate infrastructure buildout and use on-site or behind-the-meter generation.

It also calls for strengthening law enforcement’s ability to combat AI-enabled impersonation scams targeting vulnerable populations such as seniors.

The White House recommendations diverge from Blackburn’s draft in important ways, particularly on copyright.

While Blackburn’s proposal would declare that unauthorized use of copyrighted works for AI training is not fair use, the White House says the administration believes training AI models on copyrighted material does not violate copyright law, while acknowledging that contrary arguments exist and that courts should resolve the question.

It recommends that Congress avoid interfering with the judiciary’s handling of whether training on copyrighted material constitutes fair use and instead consider licensing or collective rights frameworks that could allow rights holders to negotiate compensation from AI providers without running afoul of antitrust law.

The administration’s plan also emphasizes innovation in ways that align with, but are not identical to, Blackburn’s draft.

It says Congress should establish regulatory sandboxes for AI applications, make federal datasets accessible to industry and academia in AI-ready formats, and avoid creating a new federal AI rulemaking body, instead relying on existing regulators with subject-matter expertise and on industry-led standards.

It further calls for non-regulatory efforts to integrate AI training into education and workforce programs, expand federal study of AI-driven workforce realignment, and bolster land-grant institutions so they can provide technical assistance, launch demonstration projects, and develop AI youth programs.

Those overlaps and differences underscore the politics of what comes next. States such as Colorado, California, Utah, and Texas have moved more quickly than Congress to put AI guardrails in place, creating the very patchwork the administration and Senate Republicans say threatens national competitiveness.

Supporters of federal preemption argue that companies need a single rulebook rather than fifty conflicting ones.

Critics are likely to respond that Congress is moving first to knock out stronger state protections before it has shown it can enact a federal replacement that is equally protective and enforceable.

Rep. Don Beyer, one of the sponsors of the GUARDRAILS Act, said “the Trump White House aims to kill state AI laws without setting even minimally acceptable federal guardrails, exposing the American public to the growing risks accompanying completely unchecked artificial intelligence.”

“Until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public,” Beyer said.

“Congress has the responsibility to establish a national framework for AI and any attempt by Donald Trump to create laws through executive order is a sham. It is Congress’ responsibility to check this overreach of the presidency,” added Rep. Ted Lieu.

For now, Blackburn’s proposal remains a discussion draft rather than a consensus Senate product. But together with the White House legislative recommendations, it offers the clearest picture yet of the shape a federal AI package could take if Congress moves to displace state action.

Rather than choosing between safety and innovation, the emerging Republican framework attempts to combine child protection, copyright, anti-censorship claims, workforce policy, infrastructure buildout, and national preemption into a single federal agenda.

That breadth may be part of its political appeal, but it may also be what makes the package difficult to translate into legislation that can pass both chambers and survive the conflict that is now emerging over who should control the rules for AI in the United States.
