
Future of AI regulation debated in US; EU worries about AI Act enforcement

A sweeping legislative proposal spearheaded by House Energy and Commerce Committee Chairman Brett Guthrie, a Republican, has ignited a contentious debate about the future of AI regulation in the United States.

Tucked within the Republicans’ budget reconciliation bill is a provision that would impose a ten-year nationwide moratorium on state and local enforcement of any laws regulating AI systems. This proposal, if enacted, would override a growing body of state-level efforts to govern AI technologies, and could significantly shape the national AI policy landscape through 2035.

The debate over the moratorium is taking place as international AI governance efforts struggle with their own enforcement challenges. In the European Union, the AI Act – the world’s first comprehensive legal framework for AI – faces obstacles due to a lack of funding and technical expertise among national regulators.

The language of Guthrie’s bill is explicit. It states, “no state or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten-year period beginning on the date of the enactment of this act.”

The only exceptions are narrowly tailored to facilitate AI adoption, such as removing barriers related to licensing or permitting. Such sweeping preemption would nullify existing AI regulations in states like Colorado and California, and would prevent any future legislative action at the state level aimed at curbing the risks associated with AI deployment.

The moratorium has been welcomed by major technology companies including OpenAI, Meta, and Alphabet. Industry advocates argue that a fragmented patchwork of state regulations could stifle innovation, impose heavy compliance burdens, and hinder the United States in its competition with China for AI supremacy.

In written comments to Congress, tech policy experts warned that rushed or overly broad state laws could consolidate AI development in the hands of only the largest firms while pushing smaller innovators out of the market.

The proposal has drawn sharp criticism from consumer advocacy organizations, Democratic lawmakers, and privacy watchdogs. Representative Jan Schakowsky, the ranking member of the Subcommittee on Commerce, Manufacturing, and Trade, called the moratorium “a giant gift to Big Tech” and warned that it could allow AI companies to circumvent consumer privacy protections, amplify the spread of deepfakes, and enable discriminatory profiling practices.

“The Republicans’ ten-year ban on the enforcement of state laws protecting consumers from potential dangers of new artificial intelligence systems gives Big Tech free rein to take advantage of children and families,” Schakowsky said. “This ban will allow AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI. After stopping comprehensive national privacy [legislation] from passing last year, Republicans are going after states and leaving consumers unprotected online.”

Grace Gedye, a policy analyst for Consumer Reports, said the measure would prevent states from responding to pressing AI-driven harms, including non-consensual deepfake pornography, biased hiring systems, and threats to critical infrastructure.

California Privacy Protection Agency Executive Director Tom Kemp also condemned the proposal, noting that it would derail his agency’s ongoing efforts to establish rules for automated decision-making systems. Kemp emphasized that the federal government has historically permitted states to lead on privacy protections and warned that stripping states of regulatory authority in the face of rapid technological change would undermine public trust and safety.

The proposed moratorium comes at a time when states are aggressively moving to regulate AI. Colorado passed the nation’s first comprehensive AI law in 2024, requiring developers to implement risk management and transparency protocols. California has passed several AI-related laws dealing with consumer protection, election integrity, and voice cloning, while Utah and Tennessee have enacted targeted laws on generative AI disclosures and voice deepfakes.

In total, at least 45 states and Puerto Rico have introduced over 550 AI-related bills since the start of 2025, underscoring both the perceived urgency and complexity of AI governance.

Despite the states’ momentum, the Republican-controlled House is using the budget reconciliation process to advance its moratorium proposal. At a markup scheduled for May 13, the House Energy and Commerce Committee aims to include the provision in its formal contribution to House Concurrent Resolution 14. Guthrie has defended the measure as a means of promoting American innovation and avoiding premature regulatory fragmentation, dismissing criticism from Democrats as fearmongering.

The reconciliation process, however, presents its own obstacles. Under the Senate’s Byrd Rule, provisions in a reconciliation bill must have a direct and substantial impact on federal revenue or expenditures. Legal experts question whether a blanket prohibition on state AI regulation meets this threshold. If not, the provision could be stripped during Senate review, casting doubt on its long-term viability.

Critics of the bill also raise constitutional concerns. Omer Tene, a partner at the Goodwin law firm and former privacy scholar, said broad federal moratoriums on regulation are rare and typically are confined to specific technologies like drones. He argued that the sweeping nature of the proposed AI moratorium may violate the Tenth Amendment’s anti-commandeering doctrine, referencing the Supreme Court’s decision in Murphy v. NCAA.

David Stauss of Husch Blackwell echoed Tene’s concerns, pointing out that even state laws narrowly focused on AI use in elections, healthcare, or insurance could be invalidated.

One of the more ambiguous challenges lies in how “artificial intelligence systems” will be defined under federal law. Some state statutes, like Colorado’s, use a broad definition based on the Organisation for Economic Co-operation and Development (OECD) framework, which can sweep in everything from firewalls to calculators unless explicitly exempted. A federal definition that is too expansive could inadvertently preempt areas of law not traditionally associated with AI, such as medical malpractice, product liability, and even civil rights protections.

This definitional ambiguity also extends to existing state privacy laws that provide consumers with rights to opt out of automated profiling. Depending on how AI-related terms are construed in the federal bill, these provisions could be nullified, eliminating key safeguards currently in place in states like California, Virginia, and Connecticut.

Guthrie’s bill not only seeks to delay state regulation; it also proposes a $500 million appropriation to the Department of Commerce to modernize federal IT systems using AI. The stated goals include improving service delivery, automating threat detection, and replacing legacy architecture. While this funding may help advance the government’s own AI capabilities, critics argue that it does little to address the risks AI poses to consumers and civil rights in the absence of robust regulatory frameworks.

The proposed ten-year moratorium on state AI regulation is more than a jurisdictional issue. It is a fundamental clash between two visions for the future of AI governance. On one side are those who believe centralized federal oversight is essential to avoid a regulatory labyrinth that could stifle growth and global competitiveness. On the other are lawmakers, advocates, and regulators who contend that states must be allowed to respond to the rapid deployment of AI in ways that reflect local values and protect their residents.

As the House Energy and Commerce Committee debates the measure, its implications extend well beyond the immediate legislative fight. Whether the moratorium stands or falls will signal how the U.S. plans to balance innovation with accountability in one of the most consequential technological domains of the 21st century. If Congress ultimately sides with preemption, it may also need to fill the regulatory vacuum with comprehensive federal AI legislation – something it has so far failed to deliver.

In the EU, meanwhile, with many member states already under fiscal strain, some digital policy advisors have expressed skepticism that the bloc will be able to effectively enforce the rules of the AI Act in the near term. These implementation hurdles underscore the practical difficulties of AI oversight even when comprehensive legislation exists.

The AI Act establishes a comprehensive regulatory framework for AI technologies, including specific provisions addressing facial recognition systems. The act categorizes AI applications based on risk levels. Facial recognition technologies, particularly those used for biometric identification in public spaces, fall under the “unacceptable risk” category and are generally prohibited.

Specifically, the act bans real-time remote biometric identification systems such as facial recognition in publicly accessible areas. However, exceptions exist for law enforcement purposes in limited scenarios, such as searching for victims of specific crimes, preventing imminent threats, or identifying suspects of serious offenses, and these uses require prior judicial authorization.

Additionally, the AI Act prohibits the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases. This measure aims to protect individuals’ privacy and prevent unauthorized mass surveillance.

While the act sets a high bar for AI regulation, some critics argue that the exceptions for law enforcement could be exploited, potentially undermining the protections against mass surveillance. Ongoing debates continue regarding the balance between security needs and individual privacy rights in the context of AI technologies.
