Only 0.1% of people can tell a deepfake, says iProov

Only a tiny fraction of people – 0.1 percent – can accurately distinguish between real and fake content such as images and video, according to new research on deepfakes conducted by iProov. Despite these abysmal results, around 60 percent of individuals said they were confident in their deepfake detection skills.

During its survey, the biometric identification company asked 2,000 UK and U.S. consumers to identify deepfakes. The results showed that older generations are more vulnerable: one in five consumers stated that they were not even familiar with the concept of a deepfake. Young people, however, tended to be overconfident in their ability to spot fake content.

The results show that many organizations and consumers are vulnerable to identity fraud, according to Andrew Bud, founder and CEO of iProov. The company argues that organizations should adopt biometric-based solutions with liveness detection to verify whether someone is a real person.

“Even when people do suspect a deepfake, our research tells us that the vast majority of people take no action at all,” says Bud. “Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting our personal information and financial security at risk.”

Survey participants were also 36 percent less likely to spot a synthetic video than a synthetic image, suggesting that deepfake videos are more challenging to identify than still images.

UK companies facing surge of deepfake fraud

A separate report from fraud-prevention software maker Trustpair shows how UK businesses are dealing with the surge in synthetic media fraud.

According to its data, 42 percent of companies experienced at least two successful attacks involving generative AI and deepfakes in the past year. The survey was based on interviews with 150 senior finance, treasury and accounts payable executives across the country.

Nearly three-quarters of the firms expect deepfake risks to grow in 2025 and have increased investment in fraud prevention technology. However, only 33 percent have invested in automated fraud prevention systems, while many still rely on manual methods such as callbacks and email-based validations.

Trustpair offers automated vendor account verifications to detect anomalies or suspicious transactions.

“The rise of AI-driven fraud demands a fundamental shift in how businesses approach payment security,” says Tom Abbey, senior fraud consultant at Trustpair UK. “Companies cannot rely solely on human intervention to counter such sophisticated attacks. Automation is the only way to stay one step ahead of cybercriminals.”

Reliance on social media could lead to greater vulnerability to deepfakes

New research from Nanyang Technological University (NTU) in Singapore seems to confirm iProov’s thesis that people are overconfident in their deepfake detection skills.

The scientists found that people who see the same deepfake video multiple times are more likely to believe it is true. Repeated exposure makes information feel more credible regardless of its accuracy, a psychological phenomenon known as the “illusory truth effect.”

The study involved more than 8,000 people from Singapore, China, Indonesia, Malaysia, the Philippines, Thailand, the U.S. and Vietnam. Participants were shown viral deepfake videos of media personality Kim Kardashian, Meta founder Mark Zuckerberg, Russian President Vladimir Putin and actor Tom Cruise.

Participants who had seen a viral deepfake of a celebrity before the study were more likely to believe its contents once they were exposed to it again. NTU researchers also found that people who rely on social media for news, rather than dedicated news sources such as news websites, newspapers and TV, are more likely to accept false claims as true.

“Given that mere exposure to deepfakes could reinforce false beliefs, policymakers could consider psychological mechanisms (such as the illusory truth effect) when developing educational campaigns focused on debunking deepfakes,” says Saifuddin Ahmed, assistant professor at NTU.

The study was published in the Journal of Broadcasting & Electronic Media.
