Survey shows cybersecurity experts see risk of deepfake fraud, but few have taken action

More than three out of four cybersecurity decision-makers at financial services companies are concerned about the potential for fraud leveraging deepfakes, but just over one in four (28 percent) have implemented measures to mitigate that threat, according to a survey released by iProov.

Asked which services are most threatened by deepfakes, half of respondents said online payments, while 46 percent said personal banking services. Seventy-one percent believe their customers are at least somewhat concerned about deepfakes, and 64 percent expect the threat to worsen.

Deepfakes are also the most likely means of compromising facial authentication security, according to 43 percent of the more than 100 experts polled by iProov.

“Whilst it’s encouraging to see the industry acknowledge the scale of the dangers posed by deepfakes, the tangible measures being taken to defend against them are what really matter,” comments iProov Founder and CEO Andrew Bud.

“It’s likely that so few organisations have taken such action because they’re unaware of how quickly this technology is evolving. The latest deepfakes are so good they will convince most people and systems, and they’re only going to become more realistic.”

Facebook moved to ban deepfakes from its social media platform earlier this month to fight misinformation.

Bud was made a CBE for services to Great Britain’s exports in recent honors bestowed by Queen Elizabeth.

Comments

  1. All this hand waving… At this moment in time, deepfakes are a problem for celebrities and politicians. And, frankly, many are quite funny. As the tools get cheaper and better, individual account access could be impacted. But! If the biometric system is already robust against video, then deepfakes will be summarily handled, as they would be considered nothing more than video-based spoof attempts. If the attempt is not verified as a present and alive human, it will be rejected.
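As a rough illustration of the commenter's argument, the sketch below shows how a verification flow that gates acceptance on a liveness (presentation attack detection) check would reject a deepfake replay the same way it rejects any other video-based spoof. The names, scores, and thresholds are illustrative assumptions, not iProov's or any vendor's actual API.

```python
# Hypothetical sketch: a deepfake played back to the camera may match the
# enrolled face well, but if it fails the liveness check it is rejected
# like any other video-based spoof. Names and thresholds are assumptions.

from dataclasses import dataclass


@dataclass
class VerificationAttempt:
    match_score: float      # similarity between probe face and enrolled face (0..1)
    liveness_score: float   # confidence the subject is a live, present human (0..1)


def verify(attempt: VerificationAttempt,
           match_threshold: float = 0.8,
           liveness_threshold: float = 0.9) -> str:
    # Liveness is checked first, so a replayed deepfake never reaches acceptance.
    if attempt.liveness_score < liveness_threshold:
        return "rejected: presentation attack suspected (not a live, present human)"
    if attempt.match_score < match_threshold:
        return "rejected: face does not match enrolled identity"
    return "accepted"


# A convincing deepfake replay: high face similarity, low liveness confidence.
print(verify(VerificationAttempt(match_score=0.95, liveness_score=0.2)))
# A genuine live user.
print(verify(VerificationAttempt(match_score=0.93, liveness_score=0.97)))
```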

