Election deepfake laws spread across US ahead of 2026 midterms

Maryland has become the 30th U.S. state to enact election deepfake legislation, as lawmakers race to address AI-generated political deception ahead of the 2026 midterms.
Gov. Wes Moore signed SB 141 after the bill passed the Maryland Senate in February and the House in April. The law takes effect June 1, 2026.
The Maryland General Assembly’s bill summary says the law requires and authorizes the state administrator of elections to act after receiving a credible report that election misinformation, election disinformation, or a deepfake has been or is being communicated, disseminated, or distributed.
It also requires the administrator to publicly communicate correct information and prohibits a person from knowingly or with reckless disregard creating, using, or disseminating a deepfake to produce materially false information.
The signing gives Maryland election deepfake protections at a time when lawmakers across the country are trying to respond to increasingly realistic synthetic audio, images, and video that can depict candidates saying or doing things they did not say or do.
Public Citizen, which tracks state election deepfake legislation, says the danger is especially acute when deceptive content is released days or hours before voting, leaving little time for campaigns, journalists, election officials, or voters to verify it.
The organization said Maryland’s law makes it the 30th state with election deepfake protections and argued that state legislatures have moved faster than Congress to address AI-generated election misinformation.
“State lawmakers continue to lead the charge in protecting constituents from the threat of generative AI deceiving voters and undermining elections,” said Ilana Beller, a democracy organizing manager with Public Citizen.
She said the approach is increasingly important as the 2026 midterms approach and as AI-generated videos and images become more realistic.
The laws vary by state. Some require disclaimers on AI-generated political content, while others restrict deceptive synthetic media close to elections, create civil remedies, empower election officials to issue corrections, or impose penalties for knowingly distributing false content.
The shared premise is that election-related deepfakes can distort the information environment at precisely the moment voters are making decisions.
The state-level push is unfolding alongside a growing federal debate over whether Congress should preempt state AI laws with a national framework.
Public Citizen argues that federal preemption without strong national protections would nullify state election safeguards while leaving voters exposed to AI-generated deception.
Supporters of federal preemption say a patchwork of state AI rules could create compliance burdens and slow innovation. The result is a growing conflict between state efforts to regulate election manipulation and broader federal attempts to establish uniform AI rules.
Maryland’s bill also reflects a broader concern that existing law has not kept pace with AI impersonation. That concern intensified in the state after a 2024 case involving a synthetic audio recording that impersonated a high school principal in Pikesville.
Although that case was not an election matter, it became an example for lawmakers of how quickly synthetic media can damage reputations, confuse the public, and expose gaps in existing criminal and civil law.
The push has also drawn criticism from free speech advocates and some policy groups, which argue that poorly drafted deepfake laws can sweep too broadly, create uncertainty for satire, parody, political commentary, or news reporting, and rely on subjective judgments about what counts as deceptive or materially false.
That tension has shaped debates in several states as lawmakers try to target malicious election manipulation without restricting protected political expression.
According to Public Citizen’s legislation tracker, election deepfake laws have now been enacted in 30 states, while dozens of additional bills remain active across the country. The tracker, last updated May 13, underscores how quickly the legal landscape is changing ahead of the 2026 midterms.
For election officials, the practical challenge is immediate. Deepfakes can be produced quickly, distributed widely, and tailored to specific communities.
A synthetic robocall, fake video, fabricated candidate statement, or manipulated image can move across social media and messaging platforms before officials know it exists.
Maryland’s law gives state election officials a defined role in responding to credible reports and correcting false information, while also creating legal consequences for people who knowingly or recklessly use deepfakes to produce materially false election information.
The law will not end the use of AI in campaigns. Nor does it resolve the broader question of how far states can go in policing deceptive political speech without running into constitutional limits. But Maryland’s enactment shows that state lawmakers are increasingly treating election deepfakes as a near-term threat to voting systems, not a speculative future problem.
Article Topics
AI fraud | deepfake detection | deepfakes | elections | generative AI | legislation | United States