Voice biometrics need hybrid model for secure authentication, says Meta patent

Engineers at Meta have filed for a patent to cover an authentication system that combines voice biometrics with skin vibrations.
The abstract describes an authentication system that “detects, via a microphone array, airborne acoustic waves corresponding to a vocalization of a user. The system also detects, via a vibration measurement assembly, vibration of tissue of the user caused by the vocalization.” Detected airborne acoustic waves and recorded tissue vibrations are turned into an authentication dataset which can be used as part of a user authentication process.
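The patent abstract does not disclose how the two signals are actually combined or scored, but the idea of requiring both modalities to match an enrolled template can be illustrated with a minimal sketch. Everything below is hypothetical: the feature vectors, the cosine-similarity scoring, and the threshold are illustrative choices, not anything specified in the filing.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(voice_probe, vibration_probe,
                 enrolled_voice, enrolled_vibration,
                 threshold=0.85):
    """Accept only if BOTH the acoustic and the tissue-vibration
    features match the user's enrolled templates.

    A cloned or impersonated voice might pass the acoustic check,
    but without the matching tissue vibration the combined check
    still fails -- which is the premise of the patent.
    """
    voice_score = cosine_similarity(voice_probe, enrolled_voice)
    vibration_score = cosine_similarity(vibration_probe, enrolled_vibration)
    return voice_score >= threshold and vibration_score >= threshold
```

In this toy model a deepfaked voice alone is not enough: even a perfect acoustic match is rejected when the vibration channel disagrees with the enrolled template.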
On the hardware side, the system described uses a headset or near-eye display (NED) and may include, among other components, a depth camera assembly (DCA), an audio system and a position sensor.
The innovation comes at a time when deepfakes and voice cloning are becoming a significant enabler of financial fraud. City A.M. reports on a surge in so-called CEO scams targeting executives at FTSE companies, saying at least five FTSE 100 companies and one FTSE 250 firm have fallen victim to deepfake attacks in 2024. The actual number is suspected to be much higher, given how many cases go unreported – around 80 percent, according to the Crime Survey for England and Wales.
Meta’s patent argues that, given the need for robust deepfake detection, voice biometrics as a sole means of authentication “may not be safe as one person can easily hack another person’s voice (either through computer generation, or by impersonating their voice) in order to hack the device.”
Freely available voice cloning software and other AI tools compound the problem. A new report from McAfee, entitled “Beware the Artificial Impostor,” says it’s possible to clone a voice from just three seconds of audio – easily acquired, considering more than half of all adults share their voice at least once a week online or on social media.
The costs of AI scams are high, both financially and socially. McAfee says 25 percent of adults surveyed globally have experienced an AI voice scam, with one in 10 targeted personally. More than a third of people who lost money to AI scams reported losing over $1,000, while 7 percent were swindled out of between $5,000 and $15,000. And the rise of deepfakes and disinformation has eroded trust in online content, with 32 percent of adults saying they are now less trusting of social media than ever before.
“Artificial intelligence brings incredible opportunities, but with any technology there is always the potential for it to be used maliciously in the wrong hands,” says McAfee CTO Steve Grobman.
“This is what we’re seeing today with the access and ease of use of AI tools helping cybercriminals to scale their efforts in increasingly convincing ways. Instead of just making phone calls or sending emails or text messages, with very little effort a cybercriminal can now impersonate someone using AI voice-cloning technology, which plays on your emotional connection and a sense of urgency to increase the likelihood of you falling for the scam.”
Hence Meta’s effort to solve the audio deepfake problem by using “a unique combination of vocalization of a user and vibration of tissue of the user caused by the vocalization to authenticate the user.”