UAB researchers find that automated voice imitation can spoof voice authentication systems

A research team at the University of Alabama at Birmingham has found that voice impersonation can trick both the automated and human verification used by voice authentication systems.

The research was authored by UAB graduate students Dibya Mukhopadhyay and Maliheh Shirvanian, researchers in UAB’s Security and Privacy In Emerging computing and networking Systems (SPIES) Lab, along with Nitesh Saxena, Ph.D., the director of the SPIES Lab and associate professor of computer and information sciences at UAB.

The team recently presented the research — which explores how attackers equipped with audio samples of another person’s voice could compromise their security, safety and privacy — at the European Symposium on Research in Computer Security (ESORICS) in Vienna, Austria.

Using off-the-shelf voice-morphing software, the researchers developed a voice impersonation attack capable of breaching both automated and human verification systems.

“Because people rely on the use of their voices all the time, it becomes a comfortable practice,” said Saxena. “What they may not realize is that level of comfort lends itself to making the voice a vulnerable commodity. People often leave traces of their voices in many different scenarios. They may talk out loud while socializing in restaurants, giving public presentations or making phone calls, or leave voice samples online.”

A would-be attacker could record a person’s voice in several ways: by standing in close proximity to the speaker, by placing a spam call, by scouring the Internet for audiovisual clips, or by hacking into cloud servers that store audio data.

Speech-synthesis software such as a voice-morphing engine allows attackers to create a near-duplicate of an individual’s voice from just a few audio samples.

The technology can then transform the attacker’s voice to state any arbitrary message in the voice of the victim.
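The article does not detail the internals of the morphing engine the team used, but the general idea can be sketched with a classic voice-conversion baseline: align parallel recordings of the attacker and the victim frame by frame, learn a mapping between their spectral features, then apply that mapping to any new attacker utterance. Below is a minimal, hypothetical Python illustration of that pipeline; the file names and parameters are assumptions, MFCC resynthesis is lossy, and this is not the researchers’ actual tool.

```python
# Toy voice-conversion baseline: learn a linear frame-level mapping
# from an attacker's MFCC frames to a victim's, using DTW-aligned
# parallel recordings of the same sentences. Illustrative only.
import librosa
import numpy as np
import soundfile as sf
from sklearn.linear_model import Ridge

SR = 16000
N_MFCC = 20

def mfcc_frames(path):
    y, _ = librosa.load(path, sr=SR)
    return librosa.feature.mfcc(y=y, sr=SR, n_mfcc=N_MFCC)  # (n_mfcc, T)

# Parallel training data: attacker and victim reading the same sentence
# (hypothetical file names).
src = mfcc_frames("attacker_sentence.wav")
tgt = mfcc_frames("victim_sentence.wav")

# Align frames with dynamic time warping, then pair them up.
_, wp = librosa.sequence.dtw(X=src, Y=tgt, metric="euclidean")
src_aligned = src[:, wp[:, 0]].T   # (n_pairs, n_mfcc)
tgt_aligned = tgt[:, wp[:, 1]].T

# Learn a regularized linear map from attacker frames to victim frames.
mapping = Ridge(alpha=1.0).fit(src_aligned, tgt_aligned)

# Convert a new attacker utterance (the "arbitrary message") and
# resynthesize audio from the mapped MFCCs (crude but audible).
msg = mfcc_frames("attacker_message.wav")
converted = mapping.predict(msg.T).T
sf.write("morphed_message.wav",
         librosa.feature.inverse.mfcc_to_audio(converted, sr=SR), SR)
```

Real conversion systems model the mapping far more richly, but even this crude sketch shows why a handful of samples of a victim’s speech is enough raw material to train a mapping.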

“As a result, just a few minutes’ worth of audio in a victim’s voice would lead to the cloning of the victim’s voice itself,” said Saxena. “The consequences of such a clone can be grave. Because voice is a characteristic unique to each person, it forms the basis of the authentication of the person, giving the attacker the keys to that person’s privacy.”

In its research, the UAB team investigated the consequences of stealing voices in two applications and scenarios that depend on voice authentication.

The first application involved a voice biometrics system that uses the so-called unique features of a person’s voice for authentication purposes.

The researchers found that once a fake voice fooled the voice biometrics system, the study’s participants gained full access to the device or service it protected.
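For context, here is a minimal sketch of the kind of threshold check such a voice biometrics system performs, assuming a crude utterance-level embedding (averaged MFCCs compared by cosine similarity). Production systems use far richer models such as GMM-UBM or neural speaker embeddings; the file names and threshold below are illustrative assumptions, not details from the study.

```python
# Minimal threshold-based speaker verification sketch: enroll a user as
# the mean MFCC vector of their enrollment audio, then accept a login
# attempt if its embedding is close enough by cosine similarity.
import librosa
import numpy as np

def embed(path, sr=16000, n_mfcc=20):
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return m.mean(axis=1)  # crude utterance-level "voiceprint"

def verify(enrolled, attempt, threshold=0.85):
    cos = np.dot(enrolled, attempt) / (
        np.linalg.norm(enrolled) * np.linalg.norm(attempt))
    return cos >= threshold

enrolled = embed("victim_enrollment.wav")     # hypothetical files
print(verify(enrolled, embed("login_attempt.wav")))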

The second application explored how stealing voices affects human communications: the researchers used a voice-morphing tool to imitate the voices of Oprah Winfrey and Morgan Freeman in a controlled study environment.

The study’s participants were able to make the voice-morphing system speak nearly any phrase in the victim’s tone and vocal manner, launching attacks that could jeopardize the victim’s reputation and security.

The results showed that automated verification algorithms were largely ineffective at blocking the attacks developed by the research team: for most victims, the systems rejected the fake voices less than 10 to 20 percent of the time.

In two online studies with about 100 participants, the researchers found that participants rejected the morphed voice samples of celebrities, as well as of speakers they were somewhat familiar with, only about half the time.

“Our research showed that voice conversion poses a serious threat, and our attacks can be successful for a majority of cases,” Saxena said. “Worryingly, the attacks against human-based speaker verification may become more effective in the future because voice conversion/synthesis quality will continue to improve, while it can be safely said that human ability will likely not.”

Saxena offered a few recommendations for keeping one’s voice from being stolen, including staying aware of these potential attacks and being wary of uploading audio clips of one’s voice to social media.
