Researchers find voice recognition systems easily tricked by impersonators
Researchers from the University of Eastern Finland have published a new study revealing that skilled voice impersonators can dupe advanced voice recognition systems with relative ease, because the systems are not effective at recognizing voice modifications, according to a report by V3.
Though most new mobile devices ship with built-in voice recognition and command capabilities, many of these systems lack adequate security mechanisms. As a result, they can be compromised by hackers, according to the study.
These voice recognition and command services are used to dictate messages, translate phrases and perform search queries. Their increasing adoption presents a potential opportunity for cyber criminals.
The study shows that these criminals are using various technologies — including voice conversion, speech synthesis and replay attacks — to compromise speaker recognition software.
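To see why such attacks work, it helps to consider how a typical speaker verification system makes its accept/reject decision. The sketch below is illustrative only: the 256-dimensional random vectors stand in for embeddings that a real system would extract with a speaker-encoder model, and the 0.75 threshold is an assumed value, not a detail from the study. The point is that any sample whose voiceprint lands close enough to the enrolled one is accepted, whether it came from the genuine speaker, a replayed recording, or a convincing imitation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, probe: np.ndarray,
                   threshold: float = 0.75) -> bool:
    """Accept the probe if its embedding is close enough to the
    enrolled voiceprint. A replayed, synthesized, or skillfully
    imitated sample that lands above the threshold is accepted
    exactly like the genuine speaker; this is the weakness the
    attacks described in the study exploit.
    """
    return cosine_similarity(enrolled, probe) >= threshold

# Illustrative only: in a real system the embeddings would come from
# a speaker-encoder model, not random vectors.
rng = np.random.default_rng(0)
enrolled_print = rng.normal(size=256)
imitation = enrolled_print + rng.normal(scale=0.3, size=256)  # close mimic
print(verify_speaker(enrolled_print, imitation))  # True: the mimic passes
```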
While experts are devising countermeasures to combat these attacks, voice modifications produced by human impersonators remain difficult to detect.
The study finds that voice impersonation is common in the entertainment industry, with professionals and amateurs alike copying the voice characteristics of speakers, particularly public figures.
The practice of “voice disguise”, whereby speakers alter the way they speak to avoid being recognized, commonly occurs in situations that do not require face-to-face communication.
As a result, criminals can blackmail unsuspecting people or make threatening calls. These threats underscore the need to improve the accuracy of voice recognition systems so that they are not susceptible to human-induced voice modifications.
The researchers analyzed the speech of two professional impersonators mimicking eight Finnish public figures, as well as acted speech from 60 Finnish speakers who participated in recording sessions.
The speakers were asked to alter their voices to make themselves sound older or younger, and many of them successfully fooled the recognition systems.
“Biometrics technology has been shown to significantly reduce fraud, especially in the financial sector – but it’s not the whole solution,” Tom Harwood, chief product officer and co-founder at Aeriandi, said. “Earlier this year, twins tricked the HSBC voice biometrics security system, and this instance showed that no security technology is 100 percent fool-proof.
“Technology advances have also shown that it is now possible to cheat voice recognition systems. Voice synthesiser technology is a great example. It makes it possible to take an audio recording and alter it to include words and phrases the original speaker never spoke, thus making voice biometric authentication insecure.
“The good news is that there is a way to protect against phone fraud beyond biometrics – and that’s fraud detection technology. Fraud detection on voice looks at more than the voice print of the user; it considers a whole host of other parameters. For example, is the phone number being used legitimate? Where is the caller located? Increasingly phone fraud attacks on UK banks come from overseas. Voice Fraud technology has been proven to protect against this as well as domestic threats.”
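As an illustration of the layered approach Harwood describes, the sketch below combines a biometric match score with number reputation and caller location into a single risk score. The signals, weights, and the `fraud_risk` function are hypothetical, chosen to mirror the parameters named in the quote; they do not represent Aeriandi's or any vendor's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class CallContext:
    voiceprint_score: float    # 0..1 match score from the biometric engine
    number_on_allowlist: bool  # is the calling number known and legitimate?
    caller_country: str        # derived from network metadata

def fraud_risk(call: CallContext, home_country: str = "GB") -> float:
    """Combine several signals into a single risk score
    (0 = safe, 1 = high risk). Weights are illustrative.
    """
    risk = 1.0 - call.voiceprint_score  # weak voice match raises risk
    if not call.number_on_allowlist:
        risk += 0.3                     # unknown calling number
    if call.caller_country != home_country:
        risk += 0.4                     # overseas origin, per the article
    return round(min(risk, 1.0), 2)

# Example: even a strong voice match from an unknown overseas number
# still produces an elevated risk score.
call = CallContext(voiceprint_score=0.9, number_on_allowlist=False,
                   caller_country="RU")
print(fraud_risk(call))  # 0.8
```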
Last month, Opus Research released a report that aims to dispel fears and myths surrounding voice biometrics.