AI deepfakes are wreaking havoc on an unprepared financial industry
The identity of the firm targeted in a deepfake video scam that resulted in the loss of US$25 million has been revealed. Although news of the fraud broke in February, the Financial Times has now confirmed that the attack targeted UK engineering firm Arup, a collective of 18,500 designers and consultants focused on sustainable development.
The notorious incident took place at the group’s Hong Kong offices. It began with a message from a fake CFO, followed by a video conference call in which digitally cloned deepfake avatars of the CFO and other executives instructed an employee to make 15 transfers to five Hong Kong bank accounts, totalling HK$200 million. Police investigations into the attack are ongoing, but no arrests have been made. Arup’s east Asia chair, Andy Lee, stepped down weeks after the incident.
Rob Greig, Arup’s global chief information officer, tells Dezeen that Arup is “subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes,” and raises the alarm about “the increasing sophistication and evolving techniques of bad actors.”
Deepfake and identity fraud rates soaring in APAC region
He is not alone. WION reports that the Hong Kong Securities and Futures Commission (SFC) is also warning about a deepfake video scam in which an AI-generated avatar of Elon Musk promotes an illegitimate cryptocurrency trading platform called Quantum AI. Since the regulator alerted Hong Kong police, some websites and social media pages related to the scam have been taken down, but others remain active.
Its relatively high volume of digital transactions makes the Asia-Pacific region ripe territory for fraudsters: deepfake-related fraud in APAC increased by 1,530 percent last year. In its Identity Fraud Report 2023, identity verification platform Sumsub reports that Vietnam and Japan experience the highest number of deepfake attacks, while Hong Kong ranks among the top five markets in Asia for identity fraud, with a rate of 3.3 percent in 2023. The SFC has issued 29 warnings about suspicious virtual asset trading platforms this year alone, 18 of them related to cryptocurrency.
“Indonesia, Hong Kong, and Cambodia have more than doubled their identity fraud percentages between 2021 and 2023, indicating a growing concern,” says Sumsub’s report.
AI causing “acute digital identification and security crisis”: Marcotte
Michael Marcotte, CEO of artius.ID, is also sounding the alarm about the risks that come with generative AI and deepfakes. Marcotte has highlighted the particular danger deepfake technology poses to free and fair elections. “Governments, either through a lack of ability or will, have failed to sufficiently defend democracy against deepfakes,” he says. Now, in an interview with Crowdfund Insider, he warns that the banking industry is facing “an acute digital identification and security crisis” prompted by ChatGPT, Midjourney and other easily accessible generative AI tools. Banks, says Marcotte – a co-founder of the National Cybersecurity Center (NCC) – are not ready.
“Banking KYC processes are still relying on ID card, face, and address verification,” Marcotte says. “These procedures look neolithic against deepfakes and AI-powered identification fraud. These supposed guardrails, which in many banks still rely on software from an era when the only AI was Skynet, are rendered completely obsolete in the face of hackers who can generate documents and deepfakes to leapfrog facial and ID verification.”
The numbers back him up. Synthetic identity fraud is now the fastest-growing category of financial crime in the United States, leading to US$6 billion in losses. Presentation or liveness attacks are up 40 percent in 2024. “Banking execs need to wake up and realize just how much the ground has shifted beneath their feet,” Marcotte says. “KYC procedures are already looking like relics. If consumers and corporations lose trust in these institutions, then entire economies are put at risk.”
Calling for a “radical shift,” Marcotte notes that “one option available for banks is to relinquish control of KYC data and use decentralized storage providers. If custody of the data remains in the hands of the individual, then banks won’t open themselves up to litigation or expose their customers to fraud.”
Digital ID and biometrics firms have answers customers want
Jumio’s 2024 Online Identity Study considers how customers are feeling about it all – which is not great. The study from the biometric identity verification provider shows that 72 percent of consumers worry daily about being fooled by a deepfake, and that they want governments and regulators to do more to protect them.
While banks and other financial institutions may still be dragging their feet on cybersecurity, biometrics and digital identity verification firms are responding. Zoloz, which has offices in China, Singapore and the United States, recently released an update to its biometric deepfake detection software, which features upgraded defenses against evolving infiltration tactics, including AI face swapping attacks on facial recognition systems. Others are moving in the same direction – although the demand for AI deepfake detection has also led to an influx of questionable entrants into the market. Finding a third-party biometric authentication provider that can be trusted with sensitive identity data is key to navigating an increasingly fraudulent future.
Free live webinar on June 5
For more on the deepfake threat and how to counter it, register here for an upcoming free live video webinar with AI-based identity authentication firm ID R&D, “Video Deepfakes: How Real Are They? The Threat of Injection Attacks in the Age of Gen-AI,” taking place on June 5 at 11am ET.