Survey shows cybersecurity experts see risk of deepfake fraud, but few have taken action

More than three out of four cybersecurity decision-makers at financial services companies are concerned about the potential for fraud leveraging deepfakes, but just over one in four (28 percent) have implemented measures to mitigate the threat, according to a survey released by iProov.

Asked which services are most threatened by deepfakes, half of respondents said online payments, while 46 percent said personal banking services. Seventy-one percent say their customers are at least somewhat concerned about deepfakes, and 64 percent expect the threat to worsen.

Deepfakes are also the most likely vector for compromising facial authentication security, according to 43 percent of the more than 100 experts polled by iProov.

“Whilst it’s encouraging to see the industry acknowledge the scale of the dangers posed by deepfakes, the tangible measures being taken to defend against them are what really matter,” comments iProov Founder and CEO Andrew Bud.

“It’s likely that so few organisations have taken such action because they’re unaware of how quickly this technology is evolving. The latest deepfakes are so good they will convince most people and systems, and they’re only going to become more realistic.”

Facebook moved to ban deepfakes from its social media platform earlier this month to fight misinformation.

Bud was made a CBE for services to Great Britain’s exports in recent honors bestowed by Queen Elizabeth.

Comments

One Reply to “Survey shows cybersecurity experts see risk of deepfake fraud, but few have taken action”

  1. All this hand waving… At this moment in time, deepfakes are the problem of celebrities and politicians. And, frankly, many are quite funny. As the tools get cheaper and better, individual account access could be impacted. But! If the biometric system is already robust against video, then deepfakes will be summarily handled, as they would be considered nothing more than video-based spoof attempts. If the attempt is not verified as a present and alive human, then it’ll be rejected.
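The decision flow the commenter describes — treat anything that fails liveness detection as a spoof, so a deepfake is rejected like any other replayed video — can be sketched in a few lines. This is a minimal illustration with hypothetical function names and thresholds, not iProov's actual system:

```python
from dataclasses import dataclass

@dataclass
class AuthAttempt:
    face_match_score: float  # similarity between presented face and enrolled template
    liveness_score: float    # confidence the subject is a live, physically present human

# Illustrative thresholds only; real systems tune these against measured error rates
FACE_MATCH_THRESHOLD = 0.90
LIVENESS_THRESHOLD = 0.95

def verify(attempt: AuthAttempt) -> bool:
    """Liveness is checked first: a deepfake presented on a screen fails the
    presence check regardless of how well the face itself matches."""
    if attempt.liveness_score < LIVENESS_THRESHOLD:
        return False  # not verified as a present, live human -> reject as a spoof
    return attempt.face_match_score >= FACE_MATCH_THRESHOLD

# A perfect face match replayed as video is still rejected:
print(verify(AuthAttempt(face_match_score=0.99, liveness_score=0.30)))  # False
# A live, matching subject passes:
print(verify(AuthAttempt(face_match_score=0.95, liveness_score=0.99)))  # True
```

The ordering is the point of the comment: if presentation attack detection is the gate, deepfake quality improvements attack the face-matching step but never reach it.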
