Deepfakes are a weapon of mass manipulation and most people can’t spot them

Deepfakes have become a full-blown geopolitical and financial weapon, according to a new report from IdentifAI. Most people, from corporate fraud teams to everyday internet users, are still struggling to spot them, and the problem is only getting harder to contain.

Political manipulation accounts for nearly a quarter of deepfake incidents

From senior Israeli politicians to prominent U.S. political figures and Middle Eastern state entities, synthetic media is being used to target high-profile politicians and manipulate geopolitical narratives. Political manipulation currently accounts for nearly a quarter (24.6 percent) of the total deepfake threat landscape, according to an analysis by IdentifAI.

The deepfake detection company analyzed 10,000 deepfake incidents between 2020 and 2026. Its first Deepfake Intelligence Report showed that the technology has become an important attack method for threat actors, whether they aim to achieve political destabilization, financial fraud, or other forms of manipulation.

“Our findings confirm that deepfakes and other synthetic media are now commoditized tools for large-scale extortion and disinformation,” says Marco Ramilli, CEO and co-founder of IdentifAI.

Financial fraud accounted for a fifth (20.1 percent) of deepfake incidents, with the report suggesting that multi-modal techniques, including biometric bypass, will increase over the next year.

The report also highlights current trends in the misuse of synthetic media.

Video is the most prevalent format for deepfake attacks (45.6 percent), followed by mixed media (25.2 percent), images (17.4 percent) and voice cloning (10.5 percent).

The most common social media site for distributing deepfakes was X, accounting for 51.2 percent of threat propagation among the top social media sources. The Elon Musk-owned platform significantly outpaced TikTok (21.1 percent) and YouTube (10.0 percent).

The U.S. was the most targeted country, accounting for 46.9 percent of global deepfake incidents, including both political manipulation and financial fraud. Other countries recorded significantly smaller numbers of incidents, including the UK (8.2 percent), India (7.2 percent), Israel (6.6 percent) and Iran (2.9 percent).

Deepfakes in the time of war

Although countries such as Israel and Iran still experience lower levels of deepfake incidents compared to the U.S., the ongoing conflict between Israel and Iran threatens to inflame this trend.

“Amid rising tensions between Israel and Iran, cyberattacks have become a central tool in national defense, and deepfakes serve as a strategic instrument at the state level,” says Ori Segal, CEO of Israeli cybersecurity company Cyvore.

Israel has become a major target for cyberattacks on critical infrastructure. Impersonating senior executives, forging real-time messages, and video call deception can disrupt critical processes, undermine decision-making, and even affect military operations and readiness, Segal argues in an opinion article in the Jerusalem Post.

“What until recently seemed like an extreme scenario is now an everyday reality for organizations, even the most advanced and security-conscious,” he adds.

Companies struggle to detect deepfake attacks

Organizations are now well aware that deepfakes can pose threats on many levels. But even though 60 percent of fraud experts say financial losses have significantly increased since the arrival of generative AI, a similar proportion (58 percent) also admits they struggle to determine whether synthetic media was involved in an attack, according to a new survey from Experian.

The research was conducted with Forrester Consulting and interviewed nearly 1,000 senior fraud decision-makers across EMEA and APAC.

Experian’s 2026 fraud report also showed that 66 percent of fraud experts agree generative AI is the biggest challenge to fraud prevention yet.

Financial losses due to fraud, whether it’s deepfake-fueled or not, have increased 64 percent compared to last year, with telecommunications, financial services and e-commerce among the sectors suffering the biggest hit. More than two-thirds of respondents expect more fraud attacks in 2026.

Citizens also fail at recognizing deepfakes

Unsurprisingly, companies are not the only ones struggling to detect deepfakes. Although a large portion of regular internet users in Germany (47 percent) claim that they can recognize deepfakes, a study released by the German government shows otherwise.

One third of Germans have never used common methods for uncovering deepfake videos or images, such as looking for visual inconsistencies like odd shadows or malformed hands. Only 19 percent of respondents checked the source for reliability, according to the 2026 Cybersecurity Monitor report commissioned by the German Federal Office for Information Security (BSI) and the Police Crime Prevention Office (ProPK).

At least half of respondents expressed support for potential government measures against deepfakes, with the most popular measures including swift police intervention (58 percent), mandatory labeling for AI-generated content (57 percent) and technical verification systems (53 percent).

The survey included 3,000 respondents.
