Deepfake voice technology claims first fraud victims
Criminal deepfake attacks have claimed their first victims, with a British energy company defrauded of nearly a quarter-million dollars through a wire transfer ordered by what sounded like the voice of a company executive, The Washington Post reports.
The company’s managing director was phoned by someone he believed to be a company executive, according to representatives of French insurer Euler Hermes, and though the wire request struck him as strange, he complied, thinking he was following his boss’s instructions.
When the thieves made a second request, the managing director acted on his suspicions and called the real executive. The impersonator phoned again while that call was still in progress, exposing the fraud.
Symantec researchers say they have discovered at least three such incidents, although it is unclear if that includes the above case. The losses in one case exceeded a million dollars.
The Post reports that systems freely available on the web can be used to create audio deepfakes without much sophistication, speech data, or computing power. Artificial voice start-up Lyrebird says it can create a “vocal avatar” from a minute of real-world speech.
Attacks of this kind have improved rapidly in recent years thanks to advances in algorithms and data processing, and the amount of voice data needed is shrinking, Symantec Senior Researcher Saurabh Shintre says. While the synthesized output tends to be imperfect, attackers can compensate with explanations for audio flaws and with social engineering techniques, such as pressuring targets into fast decisions.
Pindrop CEO Vijay Balasubramaniyan told Biometric Update earlier this year that biometric security systems can detect fake audio content with accuracy above 90 percent. The AI Foundation and the Technical University of Munich teamed up to combat deepfakes in May.
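None of the vendors mentioned has published the internals of its detection system, but fake-audio detection is commonly framed as a binary classification problem over acoustic features. The sketch below illustrates that general idea only: it assumes a labeled corpus of genuine and synthesized clips, and the MFCC summary features and logistic-regression classifier are illustrative choices, not Pindrop’s method.

```python
# Minimal sketch of spoofed-audio detection as binary classification.
# Assumption: you have labeled lists of genuine and synthesized clips;
# the feature set and classifier here are illustrative, not any
# vendor's actual system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as per-coefficient MFCC means and std devs."""
    y, sr = librosa.load(path, sr=16000)  # mono, resampled to 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_detector(genuine_paths, spoofed_paths):
    """Fit a classifier: label 0 = genuine speech, 1 = synthesized."""
    X = np.stack([clip_features(p) for p in genuine_paths + spoofed_paths])
    y = np.array([0] * len(genuine_paths) + [1] * len(spoofed_paths))
    return LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical usage, with file lists supplied by the caller:
# model = train_detector(genuine_wavs, spoofed_wavs)
# p_fake = model.predict_proba(clip_features("incoming_call.wav")[None, :])[0, 1]
```

Production systems reporting accuracy above 90 percent rely on far richer features and models; the value of the sketch is only in showing where a detector's decision boundary comes from: labeled examples of real and synthetic speech.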