AI fakery is turning fear into a voter suppression tool ahead of US elections

Deepfakes are expected to be deployed not merely to mislead voters, but to intimidate them into not voting

In the months leading up to the 2026 midterm elections, which could see Democrats sweep both the House and Senate, the U.S. is entering a period of political vulnerability defined less by partisan persuasion than by a rapidly destabilizing information environment.

AI is now routinely used to fabricate images, videos, audio recordings, and documents that blur the line between reality and fiction, often convincingly enough to evade casual scrutiny.

The danger is not simply that voters may be misled about candidates or policy positions, but that AI-generated media is increasingly positioned to suppress participation itself by instilling fear, confusion, and mistrust around the basic act of voting.

That risk is no longer hypothetical. The killing of Renee Nicole Good and Alex Pretti by immigration enforcement agents in Minneapolis offered a stark preview of how synthetic imagery and misidentified photographs can be weaponized at scale.

Within hours of the shootings of Good and Pretti, social media platforms were saturated with AI-generated images and short video clips that purported to depict moments from both killings that never happened.

Social media posts circulated miscaptioned photographs falsely identified as Good, fabricated screenshots claiming to show arrest records and criminal histories, and a video clip that appeared to show Good driving her vehicle into the Immigration and Customs Enforcement (ICE) agent who shot her, when she did no such thing.

In the matter of Pretti, who was shot and killed by a Border Patrol agent, an AI-generated video and still photo falsely depicted Pretti on his knees with an agent pointing a pistol at his head. The photo was widely circulated by mainstream news media and at least one U.S. lawmaker.

More recently, fake posts circulated on Facebook showing one of the daughters of Nancy Guthrie – the abducted mother of Savannah Guthrie, the well-known co-anchor of the “Today” show on NBC – and the daughter’s husband on their knees in handcuffs, with a caption claiming they had been arrested for Guthrie’s murder.

The speed and volume of these visuals overwhelmed verified reporting, creating a parallel narrative ecosystem in which speculation hardened into accusation and disinformation was repurposed to morally discredit the victims.

What followed demonstrated a critical lesson for election security: AI does not need to persuade a majority of the public to be effective. It only needs to inject enough uncertainty, fear, or fabricated “evidence” to alter behavior.

As election officials and civil rights advocates prepare for the midterms, many warn that the same tactics – synthetic imagery, impersonated voices, hallucinated documentation – will be redeployed to undermine confidence in voting itself.

“We don’t really speculate on everything that could happen in the lead-up to an election. That said, with the prevalence of cheap or free deepfake creation tools, as well as the near total lack of moderation on said deepfakes across the social web, there will undoubtedly be an increase of negative attack deepfakes made of running candidates spread throughout the social sphere,” Ben Colman, co-founder and CEO of Reality Defender, told Biometric Update.

“This is not a guess, but simply looking at past elections (where this happened in nearly every year for local, state, and national elections) coupled with the hockey stick-like growth of deepfakes on these social platforms,” Colman said. “Given the state of Wild West-style deepfake prevention on these platforms (read: nothing), each new election will breed more deepfakes which spread far and wide.”

And “community notes cannot keep up with this spread, meaning only a modicum of people who see these deepfakes will be informed that they are likely fakes,” Colman added. “The content of the fakes will undoubtedly be nothing good (for the most part), but as to what they depict on social platforms that run unchecked and unmoored from reality, it’s truly anyone’s guess.”

Federal agencies have been explicit about the threat. The Cybersecurity and Infrastructure Security Agency has repeatedly warned that generative AI tools lower the cost of influence operations while increasing their plausibility, allowing malicious actors to tailor deceptive content to specific communities and release it through private or semi-private channels where public corrections struggle to reach.

At the same time, the National Institute of Standards and Technology has documented the limits of detection and provenance systems, noting that watermarks and metadata are often stripped as content moves across platforms, leaving users with little visible guidance about authenticity.
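NIST’s point about fragile provenance can be made concrete. Embedded metadata such as EXIF sits in a discrete, easily removed byte segment of an image file; a screenshot or platform re-encode simply never writes it back. The sketch below is a hypothetical illustration (the `has_exif` helper and the stub byte streams are invented for this example, not a real detection tool): it scans a JPEG byte stream for the APP1/EXIF marker to show how trivially the presence or absence of provenance data can be checked, and how little survives a strip.

```python
# Minimal sketch: provenance metadata such as EXIF lives in an easily
# removed byte segment of a JPEG. Scanning for the APP1 marker (0xFFE1)
# shows whether that segment is present at all; a re-encode or
# screenshot drops it, which is why metadata-based provenance rarely
# survives reposting across platforms.

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the byte stream carries an EXIF APP1 segment."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI: start of image
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip to the next marker segment
    return False

# Tiny synthetic JPEG-like streams, with and without the EXIF segment.
payload = b"Exif\x00\x00" + b"MM\x00\x2a"            # stub EXIF body
app1 = b"\xff\xe1" + (len(payload) + 2).to_bytes(2, "big") + payload
with_meta = b"\xff\xd8" + app1 + b"\xff\xd9"
stripped = b"\xff\xd8" + b"\xff\xd9"                 # same image, metadata gone

print(has_exif(with_meta))   # True
print(has_exif(stripped))    # False
```

The asymmetry the sketch illustrates is the core of NIST’s warning: stripping the segment requires no expertise at all, while proving an image’s origin after the strip requires forensic analysis that rarely travels as far as the content itself.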

What made the misinformation that followed the killings of Good and Pretti so effective was not technical sophistication, but rather human vulnerability.

The rapid spread of AI-generated photos and video succeeded because most people are far less capable of recognizing synthetic media than they assume, and because they encounter it in moments of uncertainty, before verified information has time to surface.

As the 2026 midterms approach, that same vulnerability represents one of the most serious and least addressed threats to democratic participation.

Research consistently shows that public awareness of deepfakes has not translated into reliable detection skills. Surveys by the Pew Research Center find that while most Americans have heard of AI-generated images or videos, far fewer feel confident distinguishing authentic content from fabricated material.

Even among those who express confidence, performance drops sharply when content appears on social media or arrives through private messaging, stripped of context or provenance.

Academic studies echo those findings. Work associated with the Stanford Internet Observatory and other university research centers shows that people perform only slightly better than chance when asked to identify AI-generated faces, voices, or short video clips, especially when the content aligns with their expectations or triggers a strong emotional response, sometimes referred to as “rage baiting.”

Participants routinely misidentify synthetic content as real while dismissing authentic footage as fake. The latter tendency feeds what researchers call the “liar’s dividend,” in which the mere possibility of manipulation corrodes trust in all information.

This vulnerability cuts across demographics. Older adults, who often rely on Facebook groups and local community pages, tend to struggle more with manipulated imagery. Younger users, despite greater digital fluency, encounter far more synthetic content on fast-moving platforms where speed and virality outweigh scrutiny.

In both cases, confidence is often misplaced. People who believe they are adept at spotting fakes generally perform no better than those who admit uncertainty.

The misinformation that followed Good and Pretti’s deaths illustrates how these dynamics operate in real time. Many of the AI-generated images and misidentified photos were not especially advanced by forensic standards. What made them powerful was timing.

Released before official details were available, they filled an information vacuum with visuals that looked like evidence. By the time corrections emerged, the images had already shaped perception and hardened narratives.

In an election context, the consequences are far greater. Voting decisions are often made under time pressure and emotional strain.

When a synthetic video appears to show violence at a polling place, or an audio clip purports to come from a local election official warning of arrests or eligibility checks, voters are not conducting media analysis. They are making immediate judgments about personal safety.

Even those who suspect manipulation may decide that staying home is the safer option.

One plausible scenario election officials are quietly preparing for involves AI-generated videos depicting chaos at polling locations in heavily minority or economically disadvantaged neighborhoods.

A short clip circulated across social media could show armed individuals arguing outside a community center identified as a polling place, with sirens audible in the background and text warning that police have shut the site down after violence erupted.

The footage is entirely synthetic, assembled from stock video, generative imagery, and fabricated audio. By the time authorities issue a correction, turnout at that location has already dropped. Not because voters believed a political argument, but because they believed voting had become unsafe.

Another scenario relies on impersonation rather than spectacle. An AI-generated voice message, delivered in Spanish and attributed to a county elections office, warns residents that voters without “verified identification” may face questioning or delays at the polls.

The message spreads rapidly through private messaging channels in immigrant communities already wary of surveillance and enforcement. No explicit threat is made, yet the implication is unmistakable. Even voters who doubt the message’s authenticity face a calculation about risk that favors non-participation.

These tactics exploit existing anxieties and asymmetries. Synthetic media does not need to be perfect; it only needs to be plausible enough to circulate within trusted networks.

Corrections, when they arrive, are typically text-based and impersonal, rarely matching the emotional impact or reach of the original falsehood. The result is a structural advantage for fear-based disinformation, particularly in the final days before an election.

Legal and regulatory responses remain fragmented. Several states have enacted restrictions on deceptive political deepfakes, but enforcement varies widely and some measures face constitutional challenges.

Platform labeling policies and detection tools exist, but they are inconsistently applied and often invisible to users.

Meanwhile, generative tools continue to improve, producing audio and imagery that require expert analysis to debunk – analysis that rarely travels as far or as fast as the original content.

The midterm context magnifies these risks. Unlike presidential elections, which concentrate attention on a small number of high-profile races, midterms involve hundreds of contests administered by thousands of local jurisdictions. Each county, city, and precinct becomes a potential target for hyper-local misinformation.

A fabricated notice about polling hours in a single district may go unnoticed nationally while still suppressing turnout enough to affect a close race. The experience following Good and Pretti’s killings was an early warning. It showed how AI-generated imagery, misidentification, and fabricated records can overwhelm truth before it has time to establish itself.

As the 2026 midterms approach, those same dynamics are poised to be redeployed not merely to mislead voters, but to intimidate them, turning confusion and fear into tools of disengagement.

When voters cannot tell what is real and fear that what they are seeing might be true, the safest choice can feel like staying home. That outcome, more than any single deepfake or viral lie, poses the greatest threat to democratic participation.
