Deepfakes are a weapon of mass manipulation and most people can’t spot them

Deepfakes have become a full-blown geopolitical and financial weapon, according to a new report from IdentifAI. Most people, from corporate fraud teams to everyday internet users, are still struggling to spot them, and the problem is only getting harder to contain.
Political manipulation accounts for nearly a quarter of deepfake incidents
From senior Israeli politicians to prominent U.S. political figures and Middle Eastern state entities, synthetic media is being used to target high-profile politicians and manipulate geopolitical narratives. Political manipulation currently accounts for nearly a quarter (24.6 percent) of the total deepfake threat landscape, according to an analysis by IdentifAI.
The deepfake detection company analyzed 10,000 deepfake incidents between 2020 and 2026. Its first Deepfake Intelligence Report showed that the technology has become an important attack method for threat actors, whether they aim to achieve political destabilization, financial fraud, or other forms of manipulation.
“Our findings confirm that deepfakes and other synthetic media are now commoditized tools for large-scale extortion and disinformation,” says Marco Ramilli, CEO and co-founder of IdentifAI.
Financial fraud accounted for a fifth (20.1 percent) of deepfake incidents, with the report suggesting that multi-modal techniques, including biometric bypass, will increase over the next year.
The report also highlights current trends in the misuse of synthetic media.
Video is the most prevalent format for deepfake attacks (45.6 percent), followed by mixed media (25.2 percent), images (17.4 percent) and voice cloning (10.5 percent).
The most common social media site for distributing deepfakes was X, which accounted for 51.2 percent of threat propagation among top social media sources. The Elon Musk-owned platform significantly outpaced TikTok (21.1 percent) and YouTube (10.0 percent).
The U.S. was the most targeted country, accounting for 46.9 percent of global deepfake incidents, including both political manipulation and financial fraud. Other countries recorded significantly smaller numbers of incidents, including the UK (8.2 percent), India (7.2 percent), Israel (6.6 percent) and Iran (2.9 percent).
Deepfakes in the time of war
Although countries such as Israel and Iran still record lower levels of deepfake incidents than the U.S., the ongoing conflict between Israel and Iran threatens to accelerate this trend.
“Amid rising tensions between Israel and Iran, cyberattacks have become a central tool in national defense, and deepfakes serve as a strategic instrument at the state level,” says Ori Segal, CEO of Israeli cybersecurity company Cyvore.
Israel has become a major target for cyberattacks on critical infrastructure. Impersonating senior executives, forging real-time messages, and deceiving participants on video calls can disrupt critical processes, undermine decision-making, and even affect military operations and readiness, Segal argues in an opinion article in the Jerusalem Post.
“What until recently seemed like an extreme scenario is now an everyday reality for organizations, even the most advanced and security-conscious,” he adds.
Companies struggle to detect deepfake attacks
Organizations are now well aware that deepfakes can pose threats on many levels. But even though 60 percent of fraud experts say financial losses have significantly increased since the arrival of generative AI, a similar share (58 percent) admit they struggle to determine whether synthetic media was involved in an attack, according to a new survey from Experian.
The research was conducted with Forrester Consulting and interviewed nearly 1,000 senior fraud decision-makers across EMEA and APAC.
Experian’s 2026 fraud report also showed 66 percent of fraud experts agree that generative AI is the biggest challenge to fraud prevention yet.
Financial losses due to fraud, whether deepfake-fueled or not, have increased 64 percent compared to last year, with telecommunications, financial services and e-commerce among the hardest-hit sectors. More than two-thirds of respondents expect more fraud attacks in 2026.
Citizens also fail at recognizing deepfakes
Unsurprisingly, companies are not the only ones struggling to detect deepfakes. Although a large portion of regular internet users in Germany (47 percent) claim they can recognize deepfakes, a study released by the German government shows otherwise.
One third of Germans have never used common methods for spotting deepfake videos or images, such as looking for inconsistencies within an image like odd shadows or misshapen hands. Only 19 percent of respondents checked the source for reliability, according to the 2026 Cybersecurity Monitor report commissioned by the German Federal Office for Information Security (BSI) and the Police Crime Prevention Office (ProPK).
At least half of respondents expressed support for potential government measures against deepfakes, with the most popular measures including swift police intervention (58 percent), mandatory labeling for AI-generated content (57 percent) and technical verification systems (53 percent).
The survey included 3,000 respondents.