Pindrop CEO demonstrates audio deepfake and discusses biometric fraud prevention at RSA 2020

Criminals are using online content, such as YouTube videos, to synthesize the voices of company CEOs and managers and commit fraud with deepfake audio, Pindrop CEO Vijay Balasubramaniyan told an audience at the recent RSA 2020 event, Digital Information World reports.
Since the first reports of deepfake audio being used successfully to defraud a company emerged last year, the technique has been tied to a handful of incidents and roughly $17 million in losses, according to DIW. In more sophisticated schemes, AI-generated audio can also be combined with phishing or spear-phishing attacks delivered by email.
Fairly realistic cloned voices can be generated from as little as five minutes of recorded material, Balasubramaniyan told the crowd, while five hours or more can produce artificial reproductions capable of deceiving even close examination by human listeners.
Balasubramaniyan demonstrated the attack technique by synthesizing the voice of U.S. President Donald Trump from previous recordings in less than a minute. The example also illustrates the risk of the technology being used to generate and spread misinformation.
Despite this dire situation, deepfakes still make up a small percentage of fraud through the voice channel.
Pindrop algorithms can differentiate real speech from deepfakes, according to the CEO, by analyzing the pronunciation of words and matching it against human speech patterns.
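Pindrop has not published the details of these algorithms, but the general idea of checking speech against human pronunciation patterns can be sketched with a toy heuristic. Everything below is a hypothetical illustration, not Pindrop's method: the `looks_synthetic` function, the timing-uniformity test, the threshold, and the sample durations are all invented for the example.

```python
from statistics import mean, pstdev

def looks_synthetic(phoneme_durations_ms, min_cv=0.25):
    """Toy heuristic, not Pindrop's algorithm: human speech shows
    substantial natural variation in phoneme timing, while crude
    synthesizers can be unnaturally uniform. Flag a clip as suspect
    when the coefficient of variation (std dev / mean) of its phoneme
    durations falls below min_cv. The threshold is illustrative,
    not an empirically derived value."""
    avg = mean(phoneme_durations_ms)
    cv = pstdev(phoneme_durations_ms) / avg
    return cv < min_cv

# Hypothetical per-phoneme durations in milliseconds:
human  = [62, 110, 48, 95, 130, 41, 88, 155, 70, 102]  # varied timing
cloned = [80, 82, 79, 81, 80, 83, 78, 80, 81, 82]      # suspiciously even

print(looks_synthetic(human))   # False
print(looks_synthetic(cloned))  # True
```

A production system would of course operate on far richer acoustic features than raw timing, but the sketch captures the reported principle: measure how speech is produced and compare it against the statistics of genuine human speakers.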
Balasubramaniyan told Biometric Update a year ago that Pindrop could detect deepfake audio more than 90 percent of the time. Pindrop announced a new version of its Deep Voice 3 authentication software at RSA last week.
Article Topics
biometrics | deepfakes | fraud prevention | Pindrop | RSA | voice biometrics