Patchwork of state AI privacy laws creates confusion and uncertainty

In the absence of broad federal legislation specifically prohibiting or restricting the use of AI, the vacuum has been and is being filled by a patchwork of state laws and legislative proposals addressing the digital privacy issues that the emergence of AI has brought with it. And few of the laws states have passed to regulate AI within their borders can be considered comprehensive.
Many of the enacted or proposed state laws that address AI target the use of AI in processing users’ personal data and require that users have the right to opt out.
The result is a quilt of laws that, the U.S. Congressional Research Service (CRS) said in a recent issues-for-Congress brief, could restrict cross-border data flows and interfere with some businesses’ ability to conduct international trade.
Consequently, the landscape of AI regulation is a mess, one that may eventually have to be addressed at the federal level with legislation that standardizes AI data security and privacy rules and renders moot the individual laws that so many states have enacted and likely will continue to enact. Under the U.S. Constitution’s Supremacy Clause, federal law is the “supreme Law of the Land” and overrides conflicting state law. Congress sometimes expressly provides that state laws on a given topic are preempted, which is known as “express preemption.”
Whether that happens remains to be seen. But as more states pile on with individual AI privacy laws, creating mounting headaches for businesses whose commerce depends on the cross-border data flows essential to the technologies used to digitally order and deliver goods and services, Congress may be forced to step in.
“As the United States considers what framework to potentially pursue for AI at a federal level, states have already undertaken significant legislation. In some cases, without strong federal preemption, these laws could significantly disrupt the development and deployment of AI technologies,” explained Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute. “There are, however, opportunities for states to consider only intrastate applications as they relate to ensuring civil liberties or restricting or embracing the government’s use of AI. What seems certain is that states will continue to consider a wide range of policies that could impact this important technology.”
“Despite plenty of press releases, committee gatherings, and reports issued, a divided Congress has yet to produce much substance on this emerging technology. In the meantime, state lawmakers are accustomed to moving (relatively) fast and are always happy to fill any policymaking gaps left open by their colleagues at the federal level,” said Bill Kramer, Vice President of Policy at multistate.ai.
“While Congress debates what, if any, actions are needed around artificial intelligence, many states have passed or are considering their own legislation …This did not start in 2024, but it certainly accelerated … certain actions at a state level could be particularly disruptive to the development of this technology,” the Cato Institute said.
Danielle Trachtenberg, a CRS analyst in international trade and finance, said while “Congress is considering legislating in a number of areas that may shape the future of U.S. digital trade policy, including data privacy and regulation of the technology sector,” it should consider “how proposed data protection legislation might impact consumer data protection, treatment of cross-border flows of sensitive information, minimization of foreign adversaries’ access to data on U.S. citizens, and regulation of data brokers.” Congress should also “consider legislating or conducting oversight on specific data localization issues (e.g., whether or not to mandate localization of data generated by U.S. TikTok users)” and “when considering the overall digital economy, Congress [should] consider regulation or oversight of digital platforms and emerging technologies such as AI.”
Trachtenberg pointed out that “since removing support for some digital trade provisions at the World Trade Organization in 2023, the Office of the U.S. Trade Representative (USTR) has not proposed new digital trade objectives.”
Digital trade is increasingly interconnected with data policy and regulation of emerging technologies like AI and digital platforms, both of which rely on cross-border data flows.
“Cross-border data flows are essential to the technologies used to digitally order and deliver goods and services, and to many facets of the digital economy, including digital platforms,” Trachtenberg said. “Because of this, much debate on digital trade has focused on data policy and technology. Digital trade issues facing Congress include data privacy, data localization, artificial intelligence, and regulation of the technology sector.”
But as Congress dithers on AI regulatory legislation, states are filling the void, often with disparate laws that conflict with one another and could eventually conflict with federal law.
“In the absence of comprehensive federal legislation on AI, there is now a growing patchwork of various current and proposed AI regulatory frameworks at the state and local level,” explained the St. Louis, Missouri-based Bryan Cave Leighton Paisner LLP. Even in the absence of federal legislation, the international law firm said, “it is clear that momentum for AI regulation is at an all-time high. Consequently, companies stepping into the AI stream face an uncertain regulatory environment that must be closely monitored and evaluated to understand its impact on risk and the commercial potential of proposed use cases.”
“What AI legislation seeks to regulate varies widely among the states,” Huddleston said. “For example, at least 22 states have passed laws regulating the use of deepfake images, usually in the scope of sexual or election-related deepfakes, while 11 states have passed laws requiring that corporations disclose the use of AI or collection of data for AI model training in some contexts. States are also exploring how the government can use AI. Concerningly, Colorado has passed a significant regulatory regime for many aspects of AI, while California continues to consider such a regime.”
In the 2024 legislative session, at least 40 states, Puerto Rico, the Virgin Islands, and Washington, D.C. introduced AI bills; six states, Puerto Rico, and the Virgin Islands adopted resolutions or enacted legislation; and more than 100 bills are pending. Many more bills failed outright or died when state legislatures adjourned.
Some states are pursuing a lighter-touch approach, like the 22 states that have passed laws creating some form of task force or advisory council to study how state agencies can use or regulate AI. Others have focused on protecting civil liberties, like the 12 states that have passed laws restricting law enforcement’s use of facial recognition technology or other AI-assisted algorithms.
“In the past few years, AI went from an idiosyncratic legislative interest to 150 mostly study bills in 2023 to over 600 and counting this year,” Kramer wrote, noting that “this growing pile of studies, committee transcripts, and legislative language represent not only the interest of policymakers in regulating AI but also the speed at which the technology has accelerated today. And I’m certain we’ve only scratched the surface.”
Also in the absence of federal legislation, government departments – especially the Departments of Defense, Homeland Security, and the component agencies of the U.S. Intelligence Community – are implementing their own policies for the use of AI pursuant to President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
Because of growing concerns over the misuse or unintended consequences of AI, efforts are underway to develop standards. The National Institute of Standards and Technology has been holding discussions with the public and private sectors to develop federal standards for the creation of reliable, robust, and trustworthy AI systems.