Voice deepfakes on the rise; biometrics can help

A new type of deepfake, built from voice recordings, is on the rise. Voice synthesis algorithms continue to improve, and threat actors are using them for fraud, identity theft and other illicit activities.

A recent Vice article showed that several members of the 4chan message board used ElevenLabs’ beta software to generate voices resembling notables including Joe Rogan, Ben Shapiro and Emma Watson, making them appear to mouth racist or abusive remarks.

ElevenLabs provides “speech synthesis” and “voice cloning” services, allegedly to explore new frontiers of voice AI and help “creators and publishers seeking the ultimate tools for storytelling.”

In the wings is Microsoft’s VALL-E. According to TechCrunch, the model has made substantial advancements in recent months and is now capable of generating convincing deepfakes.

Need more? Enter My Own Voice, AI-powered “voice banking” software from Belgian speech technology company Acapela Group. Presented at CES 2023 and spotted by DigitalTrends, My Own Voice is designed to help people who are losing the ability to speak recreate their voice.

The software can reportedly create a convincing voice using only three minutes of recorded audio.

How to tackle voice deepfakes with biometrics

However, anti-spoofing measures are being developed in parallel.

According to voice recognition engineers at Pindrop, call centers can take steps to mitigate the harm of voice deepfakes.

First, companies can educate workers about the danger.

Second, callback functions can end suspicious calls and trigger an outbound call to the account owner for direct confirmation.

Finally, multifactor authentication (MFA) and anti-fraud solutions can reduce deepfake risks. Pindrop points to factors like analyzing call metadata for identity verification, digital tone analysis, and key-press analysis for behavioral biometrics.
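To illustrate how a call center might combine such factors, here is a minimal, hypothetical sketch of a risk-scoring function that weighs caller-ID and device metadata, key-press behavior, and a synthetic-speech detector score to decide between proceeding, stepping up authentication, or triggering a callback. The signal names, weights, and thresholds are invented for illustration; this is not Pindrop’s actual method.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    ani_matches_account: bool      # caller ID consistent with the account on file
    device_seen_before: bool       # call metadata matches a previously verified device
    keypress_deviation: float      # 0.0 (typical) .. 1.0 (highly atypical) behavior
    synthetic_speech_score: float  # 0.0 (likely live) .. 1.0 (likely synthetic)

def assess_call(signals: CallSignals) -> str:
    """Combine factors into a coarse decision: proceed, step_up, or callback.

    Weights and thresholds are illustrative assumptions, not tuned values.
    """
    risk = 0.0
    if not signals.ani_matches_account:
        risk += 0.3
    if not signals.device_seen_before:
        risk += 0.2
    risk += 0.25 * signals.keypress_deviation
    risk += 0.25 * signals.synthetic_speech_score

    if risk >= 0.6:
        return "callback"   # end the call; dial the number on file for confirmation
    if risk >= 0.3:
        return "step_up"    # require an additional authentication factor
    return "proceed"
```

In this sketch, a familiar device with normal typing and a low synthetic-speech score proceeds normally, while an unknown device combined with a high synthetic-speech score routes to the outbound-callback path described above.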

Even China is working on deepfake regulation. As reported by the New York Times, the country unveiled stringent rules requiring manipulated material to have the subject’s consent and bear digital signatures or watermarks.

Whether regulation will prove effective against deepfakes remains to be seen. Rights advocates, however, warn that the rules could further curtail speech in China.
