Researchers find voice recognition systems easily tricked by impersonators

Researchers from the University of Eastern Finland have published a new study showing that skilled voice impersonators can dupe advanced voice recognition systems with relative ease, because the systems struggle to detect deliberate voice modifications, according to a report by V3.

Though the majority of new mobile devices ship with built-in voice recognition and command capabilities, many of these systems lack adequate security mechanisms. As a result, these systems can be compromised by attackers, according to the study.

These voice recognition and command services are used to dictate messages, translate phrases and perform search queries. Their increasing adoption presents a potential opportunity for cyber criminals.

The study shows that these criminals are using various technologies — including voice conversion, speech synthesis and replay attacks — to compromise speaker recognition software.

And while experts are devising various techniques and countermeasures to combat these attacks, voice modifications generated by humans cannot be detected easily.
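Typical speaker recognition systems accept a caller when the similarity between a probe voice sample and an enrolled voiceprint crosses a fixed threshold. The sketch below (toy three-dimensional vectors and an illustrative threshold; real systems use deep speaker embeddings with hundreds of dimensions) shows why a skilled impersonator who lands close to the target in embedding space slips through:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "voice embeddings" -- purely illustrative values.
enrolled = [0.9, 0.1, 0.3]        # legitimate speaker's enrolled voiceprint
impersonator = [0.85, 0.15, 0.35]  # a skilled mimic lands nearby in embedding space
stranger = [0.1, 0.9, 0.2]         # an unrelated speaker lands far away

THRESHOLD = 0.95  # illustrative acceptance threshold

def verify(probe):
    """Accept the probe if it is similar enough to the enrolled voiceprint."""
    return cosine_similarity(enrolled, probe) >= THRESHOLD
```

Here the impersonator's embedding scores above the threshold just like the genuine speaker, while the stranger is rejected; tightening the threshold instead raises false rejections of legitimate users, which is the trade-off the researchers highlight.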

The study finds that voice impersonation is common in the entertainment industry, with professionals and amateurs alike copying the voice characteristics of speakers, particularly public figures.

The practice of “voice disguise”, whereby speakers alter the way they speak in order to avoid being recognized, can frequently occur in situations that don’t require face-to-face communication.

As a result, criminals can blackmail unsuspecting people or make threatening calls. These risks underscore the need to improve the accuracy of voice recognition systems so that they aren't susceptible to human-induced voice modifications.

The researchers analyzed the speech of two professional impersonators mimicking eight Finnish public figures, as well as acted speech from 60 Finnish speakers who participated in recording sessions.

The speakers were asked to alter their voices to make themselves sound older or younger, and many of them were able to successfully fool the speech systems.

“Biometrics technology has been shown to significantly reduce fraud, especially in the financial sector – but it’s not the whole solution,” Tom Harwood, chief product officer and co-founder at Aeriandi, said. “Earlier this year, twins tricked the HSBC voice biometrics security system, and this instance showed that no security technology is 100 percent fool-proof.

“Technology advances have also shown that it is now possible to cheat voice recognition systems. Voice synthesiser technology is a great example. It makes it possible to take an audio recording and alter it to include words and phrases the original speaker never spoke, thus making voice biometric authentication insecure.

“The good news is that there is a way to protect against phone fraud beyond biometrics – and that’s fraud detection technology. Fraud detection on voice looks at more than the voice print of the user; it considers a whole host of other parameters. For example, is the phone number being used legitimate? Where is the caller located? Increasingly phone fraud attacks on UK banks come from overseas. Voice Fraud technology has been proven to protect against this as well as domestic threats.”
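The approach Harwood describes combines the voiceprint with other call metadata. The sketch below is illustrative only (the signal weights and parameter names are assumptions, not Aeriandi's actual method): it folds a voiceprint match score, number reputation, and caller geography into a single risk score.

```python
def fraud_risk(voice_match: float, number_known: bool,
               caller_country: str, expected_country: str = "GB") -> float:
    """Combine several weak signals into one call-risk score.

    voice_match: voiceprint similarity in [0, 1]
    number_known: whether the calling number has a legitimate history
    caller_country: where the call originates
    Weights are illustrative, not from any real product.
    """
    risk = 0.0
    risk += (1.0 - voice_match) * 0.5   # a weak voiceprint match raises risk
    if not number_known:
        risk += 0.25                    # unrecognized calling number
    if caller_country != expected_country:
        risk += 0.25                    # overseas origin, per the quoted UK example
    return risk
```

A call with a strong voice match from a known domestic number scores near zero, while a weak match from an unknown overseas number scores high even though no single signal is conclusive on its own, which is the point of looking beyond the voiceprint.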

Last month, Opus Research released a report that aims to dispel fears and myths of voice biometrics.
