Deepfake threat still stirring concern about election integrity

Fears grow around ‘cognitive hacking that changes your relationship with reality’

The specter of deepfakes continues to hover over the various elections of 2024, and even though little actual deepfake-driven disruption has materialized, anyone who has done a Google image search recently will know how much fake content has inundated the digital world.

In response, Microsoft has lent support to an organization looking to develop an open technical standard for establishing the provenance of digital content and identifying AI-generated assets. In Brazil, the DFRLab and the Federal University of Rio de Janeiro’s NetLab UFRJ are testing “various methodologies to examine the landscape of research methods to help identify electoral deepfakes” that could impact elections. Meanwhile in California, a U.S. district judge has paused a law that allows anyone to sue for damages over election deepfakes, suggesting it might violate the Constitution.

Microsoft offers ‘Content Credentials’ to fight deepfake threat

A post by Vanessa Ho in Microsoft’s Building AI Responsibly series says that, as deepfakes become easier to create, the company is committed to supporting a “more trustworthy information ecosystem with responsible AI tools and practices.”

Andrew Jenks is Microsoft’s director of media provenance and chair of the Coalition for Content Provenance and Authenticity (C2PA), which Microsoft co-founded. “The repercussions of deepfakes can be incredibly severe,” he says. “They’re a form of cognitive hacking that changes your relationship with reality and how you think about the world.”

Generative AI is spurring a rise in disinformation. Identity theft and political interference are well within the reach of fraudsters in an ecosystem in which uncredited and unsourced content circulates freely.

Microsoft, which has previously pushed for AI deepfake fraud to be made illegal, is currently previewing an application that allows creators and publishers to add “Content Credentials” to their work – certified metadata that cryptographically attaches details such as who made the content, when it was made and whether its creation involved AI, serving as a kind of invisible trust stamp.

“Content Credentials provide an important layer of transparency, whether or not AI was involved, to help people make more informed decisions about content they share and consume online,” Jenks says. “As it becomes easier to identify content sourcing and history, people may become more skeptical of material that lacks specific provenance information.”
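
The mechanics behind this are worth a brief illustration. The sketch below is a deliberately simplified, hypothetical model of the idea in Python: hash the content, record the provenance details Jenks mentions (who made it, when, and whether AI was involved), and cryptographically sign the result so later tampering is detectable. It is not the C2PA format itself, which defines much richer manifests and certificate-based signing; the field names and functions here are illustrative only, and the example assumes the widely used cryptography package for Ed25519 signatures.

```python
# A minimal, hypothetical sketch of the idea behind Content Credentials:
# binding provenance metadata to a content hash with a digital signature.
# This is NOT the C2PA specification, which defines JUMBF manifests and
# X.509 certificate chains; it only illustrates the underlying concept.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_credential(content: bytes, creator: str, created: str,
                    ai_used: bool, key: Ed25519PrivateKey) -> dict:
    """Build a signed provenance record for a piece of content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,       # who made the content
        "created": created,       # when it was made (ISO 8601)
        "ai_generated": ai_used,  # whether AI was involved
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}


def verify_credential(content: bytes, credential: dict,
                      public_key: Ed25519PublicKey) -> bool:
    """Check that the content matches the manifest and the signature holds."""
    manifest = credential["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Example: sign an asset, verify it, and confirm tampering is detected.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
cred = make_credential(image, "Newsroom Photo Desk",
                       "2024-10-04T12:00:00Z", ai_used=False, key=key)
assert verify_credential(image, cred, key.public_key())
assert not verify_credential(image + b"tampered", cred, key.public_key())
```

In the actual standard, the signing key chains back to a certificate identifying the signer, which is what lets a credential function as the “trust stamp” described above rather than merely a tamper seal.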

Identifying deepfakes still a laborious process for Brazilian researchers

Deepfake content has been on the minds of researchers at the DFRLab and the Federal University of Rio de Janeiro’s NetLab UFRJ, as Brazilians headed to the polls this week for municipal elections. For DFRLab, Beatriz Farrugia explores “the challenges of identifying deepfakes ahead of the 2024 Brazil election” in a case study that illustrates difficulties researchers and others face in attempting to identify generative content, and looks at new regulations that “codify rules stating that any campaign content generated or edited by AI must feature a disclaimer acknowledging the use of AI.”

A thorough analysis of platform compliance in identifying deepfakes and AI-generated content leads to the following conclusion: “there are limitations in available research methodologies to monitor the proliferation of AI-generated imagery at scale. Due to these current limitations in platform search capabilities, identifying AI-generated content remains a laborious process, requiring ongoing and often manual monitoring, whether by the research community, electoral officials, or the public at large.”

California deepfake law a ‘blunt tool that hinders humorous expression,’ says judge

In a notably humorless ruling on the mechanics of humor, U.S. District Judge John A. Mendez has granted a preliminary injunction blocking a California law that gives people the right to sue for damages caused by deepfakes, saying it likely violates the First Amendment.

AP quotes the ruling, in which Mendez writes that “most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”

The law has only been in place for a month, and was accompanied by two other bills putting tough restrictions on the use of AI to create false images or videos in political ads.

Public Citizen, which supported the law, has issued a statement from its co-president Robert Weissman, who says “the court’s decision misses the fundamental problem with deepfakes, which is not simply that they make false claims but that they show candidates saying or doing things that the candidates did not say or do. The court suggests that a targeted candidate can just respond with counter speech – but that is not true, where the candidate has to ask the public not to believe their eyes and ears.”

He notes that the court “recognized that labeling requirements, if narrowly tailored, could pass constitutional muster,” and says “there’s nothing about the First Amendment that ties our hands in addressing fraud and a here-and-now threat to democracy.”
