Deepfakes a ‘now problem’ as EU AI Act passes compliance deadline: Reality Defender

Risk from impersonation scams, disinformation campaigns, reputational attacks is real

First it was Joe Biden, Kamala Harris and Taylor Swift. Now it’s Scarlett Johansson, Emmanuel Macron and Italy’s Defence Minister Guido Crosetto. No, it’s not the world’s most disappointing wax museum: these are among the growing number of public figures whose likeness or voice has been deepfaked.

The first compliance deadline of the EU AI Act went into effect on February 2, 2025. A blog post from Reality Defender says that the regulation is “a clear signal that deepfakes are no longer a futuristic concern, but a very real threat impacting businesses today.” It cites data from Sumsub showing that deepfake incidents worldwide increased by 245 percent year-over-year.

Macron shares deepfake montage at AI Action Summit

Transparency and disclosure are key elements of the Act, but some worry that oversharing deepfakes could help them spread.

To kick off this week’s AI Action Summit in Paris, French President Emmanuel Macron posted a video to social media of himself reacting to a montage of deepfakes that place him in popular movies and TV series. Critics have warned against “normalizing deepfakes,” which some say could accelerate the erosion of reality.

The BBC quotes Professor Philip Howard, president of the International Panel on the Information Environment, who says what might seem like playful fun becomes problematic, considering that “these kinds of videos are often released when the guidelines on public communication are not clear.”

Labels for AI content only as good as its creators’ intentions

Reality Defender CEO Ben Colman notes that the EU AI Act mandates clear labelling for any AI-generated or manipulated content. “This includes the use of watermarks or other technical markers to ensure that audiences can easily identify deepfake content.” But, Colman says, “according to recent research, adversaries have demonstrated ways to remove or tamper with watermarks (FS-ISAC), and these solutions only work when content creators actively participate in marking their content as synthetic.”
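To make that fragility concrete, the minimal Python sketch below is an illustration only – not Reality Defender’s method or anything the Act prescribes. The file names and the “ai_generated” tag are invented for the example, and it assumes the Pillow library is installed. It labels an image with a metadata disclosure and then shows how a routine re-save silently discards it.

```python
# Illustrative only: a provenance label stored as PNG metadata survives
# only as long as nobody re-encodes the file.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "AI-generated" image and attach a disclosure tag.
synthetic = Image.new("RGB", (64, 64), color="purple")
label = PngInfo()
label.add_text("ai_generated", "true")   # hypothetical disclosure key
synthetic.save("labeled.png", pnginfo=label)

print(Image.open("labeled.png").text)    # {'ai_generated': 'true'}

# A trivial re-save, the kind any re-poster or scraper might perform,
# writes a new PNG without the text chunk, and the disclosure is gone.
Image.open("labeled.png").save("resaved.png")
print(Image.open("resaved.png").text)    # {}
```

Pixel-level watermarks are harder to shake off than metadata, but as the FS-ISAC research cited above suggests, they too can be removed or tampered with.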

Like most justice, deepfake detection only works if the human will is there. Leaders may preach about the dangers of AI, even as they use technology to craft their own ideologically driven versions of reality. ANI reports that at the AI Summit, India’s Prime Minister Narendra Modi highlighted the potential risk of deepfakes and disinformation, emphasizing that the technology must be rooted in local ecosystems. “We must develop open source systems that enhance trust and transparency,” Modi says. “We must build quality data centres free from biases, we must democratize technology and create people-centric applications.”

Per Amnesty International, “over the past 10 years, the government of Narendra Modi has repeatedly trampled on dissent and discriminated against religious minorities.”

Deepfake minister asks prominent Italians for €1M to free nonexistent hostages

Since human will is vulnerable to the madness of kings, the most robust and platform-agnostic approach is inference-based detection, which analyzes content directly for signs of manipulation. Reality Defender says that “unlike provenance solutions, inference detection works regardless of whether the content creator chooses to participate, making it particularly effective against malicious actors who deliberately avoid transparency measures.”
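As a hypothetical, simplified picture of what analyzing content directly can look like – not Reality Defender’s actual pipeline, and with the function name, cutoff and threshold invented for the example – the Python sketch below scores an image by how much of its spectral energy sits in high frequencies, one of the statistical cues explored in academic deepfake-detection research. Production systems rely on trained models rather than a single hand-set heuristic.

```python
# Hypothetical sketch of inference-style (content-based) screening:
# score a frame by the share of its spectral energy in high frequencies.
import numpy as np

def high_frequency_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency block."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    energy = np.abs(spectrum) ** 2
    h, w = gray.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = energy[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return float((energy.sum() - low) / energy.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((256, 256))          # stand-in for a grayscale video frame
    score = high_frequency_ratio(frame)
    # The 0.5 threshold is arbitrary and for illustration only.
    verdict = "flag for review" if score > 0.5 else "no flag"
    print(f"high-frequency ratio: {score:.3f} -> {verdict}")
```

The point of such content-side checks is that they need nothing from the creator: no watermark, no metadata, no cooperation.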

In the category of “did not choose to participate” falls Italian Defence Minister Guido Crosetto. Euronews reports that a scam using a voice deepfake to impersonate Crosetto asked some of the nation’s wealthiest individuals to wire money overseas.

The deepfake Crosetto targeted fashion designer Giorgio Armani, former Inter Milan owner Massimo Moratti, and Prada co-founder Patrizio Bertelli, among others. Each was asked to transfer around €1 million to a Hong Kong-based bank account. The money, they were told, was to free kidnapped Italian journalists in the Middle East.

On X last week, Crosetto called the effort a “serious scam” and “an absurd affair”.

Misuse of AI risks ‘losing a hold on reality’: Scarlett Johansson

Indeed, says Reality Defender, “the risks posed by deepfakes are not hypothetical – they are happening now, and the EU AI Act clearly indicates as such.” Seventy percent of global decision makers now consider deepfakes to be a significant threat to their businesses.

“We’ve seen real-world cases where deepfake technology has been used for executive impersonation scams, disinformation campaigns, and reputational attacks. These types of attacks don’t just threaten financial stability – they erode trust among employees, customers, and stakeholders.”

And they can come with sinister political overtones. Euronews also has a story on a deepfake video featuring AI-generated avatars of Jewish celebrities protesting recent comments from rapper Kanye West on X, in which he indicated that he is a Nazi.

The video features AI-generated versions of Steven Spielberg, Jerry Seinfeld, Drake, David Schwimmer, Adam Sandler, Natalie Portman, Sacha Baron Cohen, Jack Black, Ben Stiller, Adam Levine, Lenny Kravitz and Scarlett Johansson.

In a statement to People magazine, Johansson – who has already had one highly publicized run-in with AI fakery – condemns the video. “I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind,” she says. “But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality.”
