PhD student uses deepfake to pass popular voice authentication and spoof detection system
University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after using deepfake AI-generated audio recordings to gain access to an account protected with voice biometrics.
The research shows that a hacker can create a deepfake voice from as little as five minutes of the target's recorded speech, which can be harvested from public posts on social media. Open-source AI software freely available on GitHub can generate deepfake audio capable of defeating voice authentication.
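The release does not name the specific tool used. As one hedged illustration only, an open-source voice-cloning toolkit such as Coqui TTS (hosted on GitHub) can synthesize speech in a target's voice from a short reference clip; the file names and phrase below are hypothetical:

    # pip install TTS  -- Coqui TTS, an open-source voice-cloning toolkit on GitHub
    from TTS.api import TTS

    # Load a pretrained multilingual voice-cloning model (XTTS v2).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # Clone the target's voice from a short reference recording and
    # synthesize an arbitrary authentication phrase.
    tts.tts_to_file(
        text="My voice is my password.",
        speaker_wav="target_sample.wav",  # hypothetical reference clip
        language="en",
        file_path="deepfake.wav",
    )

The resulting audio file could then be played back to a voice authentication prompt, which is the attack surface the research examines.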
He used the deepfake to expose a weakness in the Amazon Connect voice authentication system, a UW release says. Four-second attacks on Connect succeeded 10 percent of the time, while attacks closer to 30 seconds in length succeeded 40 percent of the time.
In response, Amazon added biometric anti-spoofing software designed to detect digital markers in a voice recording that reveal whether it was produced by a machine or a human. This worked until Kassis used free software to strip those markers from his deepfakes.
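The paper's actual marker-removal pipeline is more involved than the release describes. As a toy sketch of the general idea only, assuming a detector that keys on out-of-band synthesis artifacts, simple signal processing can mask machine markers in a clip; every file name and parameter here is an assumption:

    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    # Load the synthesized deepfake audio (hypothetical file name).
    audio, sr = sf.read("deepfake.wav")

    # Band-pass to the telephony range (300-3400 Hz), discarding out-of-band
    # synthesis artifacts that some spoofing detectors key on.
    sos = butter(6, [300, 3400], btype="bandpass", fs=sr, output="sos")
    cleaned = sosfilt(sos, audio)

    # Add a faint noise floor to mask residual machine artifacts.
    cleaned = cleaned + np.random.normal(0.0, 1e-4, size=cleaned.shape)

    sf.write("deepfake_cleaned.wav", cleaned, sr)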
His method can bypass less sophisticated voice biometric authentication systems with a 99 percent success rate after six tries, according to the announcement.
“Our attack,” says Kassis in his journal article, “targets common points of failure that all spoofing countermeasures share, making it real-time, model-agnostic, and completely blackbox without the need to interact with the target to craft the attack samples.” In other words, the countermeasures rely on cues that are easy to identify, and therefore easy to forge, to distinguish spoofed from authentic audio.
Urs Hengartner, a computer science professor who supervises Kassis and co-authored the paper, said that “by demonstrating the insecurity of voice authentication, we hope that companies relying on voice authentication as their only authentication factor will consider deploying additional or stronger authentication measures.”