Deepfake videos looked so real that an employee agreed to send fraudsters $25 million
In a jarring demonstration of how AI-generated biometric deepfakes have supercharged fraud tactics, a virtual meeting featuring video deepfakes of senior management tricked an employee at an unidentified multinational firm into transferring US$25 million to five Hong Kong bank accounts.
The South China Morning Post reports that a finance employee in the firm’s Hong Kong office initially suspected a meeting request from the company’s chief financial officer was a phishing email because it mentioned “secret transactions”. On the subsequent video call, however, the participants, including the CFO, looked and sounded enough like real people to dispel his suspicions. The employee went on to fulfil the deepfake avatars’ request, making 15 transfers totalling HK$200 million.
The deepfakes were created using publicly available footage, and the fraud was discovered only once the employee checked in with the company’s head office – by which time the money was already gone.
Police have not identified the name of the firm or the employee.
Deepfakes have quickly become a public nuisance and fraud risk of mainstream proportions. Pornographic deepfakes of popular recording artist Taylor Swift recently made headlines and highlighted the disturbing ways in which deepfake technology can impinge on privacy rights.
Gartner says deepfakes are eroding trust in single-solution identity verification
New research from research and advisory firm Gartner predicts that by 2026, 30 percent of enterprises will no longer consider identity verification and authentication tools that rely on face biometrics alone to be reliable, due to the evolution of AI-generated deepfakes.
In a company release, VP Analyst Akif Khan says current standards and testing to assess presentation attack detection (PAD) mechanisms “do not cover digital injection attacks using the AI-generated deepfakes that can be created today.”
Gartner’s research shows that presentation attacks remain the most common attack vector, but injection attacks are on the rise, with generative AI driving a 200 percent increase in 2023. Gartner says the numbers demonstrate the need for fraud prevention measures that combine PAD, injection attack detection (IAD) and image inspection – and the wisdom of engaging biometrics and digital ID vendors whose products can demonstrably detect liveness in an increasingly fake world.
“Organizations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection,” says Khan.