States push ahead on AI regulation as Congress falters

This month, the U.S. Senate voted to strike down a controversial provision in President Trump's sweeping tax and spending package, officially the One Big Beautiful Bill Act, that would have blocked states and localities from regulating AI for the next ten years. The measure had ignited bipartisan backlash, with critics arguing that it would strip states of the ability to address harms posed by algorithmic discrimination, surveillance abuses, and AI-driven fraud.
The revival of state action comes amid growing frustration over the lack of a unified federal AI framework. While the business community has lobbied for preemptive national legislation to avoid regulatory fragmentation, many states view themselves as essential watchdogs in the absence of timely congressional oversight. Meanwhile, industry groups continue to press Washington to leave room for innovation, especially in sectors using AI for fraud detection and efficiency gains.
With the moratorium dead, more than 1,000 AI-related bills have surged back into legislative play across all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. These measures span a wide range of issues, including biometric data protection, algorithmic transparency, and restrictions on AI tools used in hiring, criminal justice, and education. Lawmakers in California, Colorado, New York, and Texas are crafting frameworks that impose risk assessments, mandate human oversight, and prohibit AI applications deemed discriminatory or unsafe.
The Trump administration has tended to favor innovation and industry self-regulation over federal mandates and prescriptive legislation. David Sacks, a technology investor and chair of Trump's Council of Advisors on Science and Technology, has put forth a plan for government AI that closely mirrors what the AI industry is lobbying for. For now, though, the collapse of the moratorium has reaffirmed a decentralized model of AI governance in the U.S.
As Congress weighs its next steps, states are forging ahead and setting the tone for how algorithmic accountability and AI safety will be handled on the ground. This decentralization has sparked both enthusiasm and concern. Proponents argue that states are better positioned to craft laws that reflect local values and needs.
Critics, on the other hand, worry that the growing complexity of a fragmented regulatory landscape will impose heavy compliance burdens on businesses operating across state lines. Legal experts point to overlapping or contradictory requirements that could stifle innovation and increase legal uncertainty for developers.
In the absence of a national framework, state legislatures continue to lead the charge. Despite regional differences, many of the legislative efforts share common principles. Most emphasize transparency, accountability, and harm prevention. Common provisions include mandatory disclosures when AI is used in hiring, lending, housing, or criminal justice; bans or restrictions on high-risk applications like facial recognition; and requirements for human oversight in automated decision-making.
States such as California, Colorado, New York, and Utah have also incorporated risk assessments and bias mitigation protocols into their laws, signaling a growing consensus on the need for ethical AI governance. California’s regulations build on the California Consumer Privacy Act, introducing rules around automated decision-making technologies. Colorado’s AI Act mandates safeguards for high-risk AI systems that affect access to essential services. New York now requires public agencies to disclose their use of AI tools and mandates regular bias assessments.
In states like Kentucky and Maryland, lawmakers are targeting healthcare-related AI and biometric data protections. Meanwhile, states like Texas and Montana have moved to regulate AI use in public safety and criminal sentencing contexts.
One of the most significant developments expected in the coming months is the role of the Senate Committee on Commerce in shaping federal AI oversight. A major focus of attention is Senate Bill 1290, the Artificial Intelligence and Critical Technology Workforce Framework Act, introduced by Senator Gary Peters (D-MI) with bipartisan support.
The bill aims to strengthen the U.S. workforce in AI and other critical technologies. It tasks the National Institute of Standards and Technology (NIST) with developing national workforce frameworks that define the roles, skills, and knowledge these jobs require, building on the existing NICE framework for cybersecurity.
The NICE (National Initiative for Cybersecurity Education) framework establishes a common lexicon that describes cybersecurity work and workers regardless of where or for whom the work is performed.
“As the artificial intelligence sector continues to grow and play an increasingly important role in everything from health care to finance to agriculture, it’s crucial that we have a highly skilled workforce ready to drive innovation and keep the United States at the forefront of this industry,” Sen. Peters said.
S.1290 is seen as part of a broader effort to align national security, economic competitiveness, and educational pipelines with AI governance. It has the support of the Information Technology Industry Council, Information Systems Audit and Control Association, and Americans for Responsible Innovation.
The bill is expected to receive a dedicated hearing by the Senate Commerce Committee in the coming weeks. Chaired by Republican Sen. Ted Cruz, the committee has already held hearings on AI innovation, competitiveness, and regulatory streamlining.
The anticipated S.1290 hearing is likely to be paired with discussion about the Center for AI Standards and Innovation (CAISI), known as the AI Safety Institute under the Biden administration. CAISI was rebranded under Trump Commerce Secretary Howard Lutnick and now operates as the “primary point of contact” for evaluating AI systems, establishing voluntary standards, and enhancing coordination with federal agencies and private developers.
CAISI now collaborates closely with NIST to develop best practices and testing frameworks and assist tech firms working through the standards process. Lawmakers are considering expanding CAISI’s authority beyond voluntary guidelines to include incident reporting, risk evaluation, and audit authority.
If S.1290 passes, it would not only formalize NIST’s role in workforce development, but it could also institutionalize CAISI’s place in national AI governance. CAISI’s growing influence, alongside the failure of the moratorium, positions it as a critical node in future regulatory efforts.
Lawmakers appear to be pursuing an incremental approach. Recent federal measures, such as the bipartisan bill to ban Chinese-developed AI in federal agencies and the passage of the Take It Down Act to combat deepfake abuse, illustrate targeted responses to specific AI threats. These narrower bills may serve as a blueprint for broader legislation down the road.
Still, significant hurdles remain. Partisan divides over data privacy, civil liberties, and government authority continue to stall negotiations. Conservatives caution against federal overreach that could choke innovation, while progressives push for equity and rights-based protections. The failure of the moratorium has only amplified these ideological splits.
In the meantime, the states have become de facto laboratories for AI regulation. Their legislative frameworks are already shaping how companies design, deploy, and govern their technologies. Whether through mandatory algorithmic audits, disclosure requirements, or bans on deceptive applications, states are setting the tone for AI accountability.
Amid this rapidly evolving landscape, the business community has ramped up efforts to pressure Congress into enacting a unified national AI framework. Major tech firms, trade associations, and cross-industry coalitions argue that without a coherent federal standard, the patchwork of state laws will hinder innovation and complicate nationwide deployment.
Their message to lawmakers is clear: a national framework is essential not only for regulatory clarity but also for maintaining U.S. competitiveness in the global AI race. Industry advocates are lobbying for legislation that balances innovation-friendly guardrails with meaningful accountability, drawing comparisons to Europe’s AI Act but urging a uniquely American approach.
Tech companies, including Microsoft, Google, Meta, Amazon, Nvidia, OpenAI, and Anthropic, have intensified their lobbying for federal legislation, calling for national guardrails that preempt state laws, reduce compliance overhead, and preserve U.S. competitiveness. Payment processors and financial institutions have joined the chorus, warning that state-level restrictions could interfere with fraud detection systems powered by AI.
Politico reported that OpenAI and Anthropic “are now adding Washington staff, ramping up their lobbying spending and chasing contracts from the estimated $75 billion federal IT budget, a significant portion of which now focuses on AI.”
Continuing, Politico said, “Scale AI, a specialist contractor with the Pentagon and other agencies, is also planning to expand its government relations and lobbying teams,” and “in late March, the AI-focused chipmaking giant Nvidia registered its first in-house lobbyists.”
Anthropic CEO Dario Amodei has urged Congress to pass a national transparency standard for AI companies.
“They’re nurturing relationships with lots of senators and a handful of members [of the House] in Congress. It’s really important for their ambitions, their expectations of the future of AI, to have Congress involved, even if it’s only to stop us from doing anything,” said Rep. Don Beyer.
“The overarching ask is for no regulation or for light-touch regulation, and so far, they’ve gotten that,” added Doug Calidas, senior vice president of government affairs for Americans for Responsible Innovation.