Deloitte predicts losses of up to $40B from generative AI-powered fraud

Deepfake threat makes AI-based defenses necessary to stay competitive

When an employee on a video call at Arup’s Hong Kong offices transferred US$25 million to fraudsters at the request of deepfake avatars of his company’s executives, he arguably triggered a paradigm shift in the financial industry’s general awareness of generative AI. The case has been cited time and again as proof that a new era of deepfake fraud is upon us, with the financial sector as its main target. Increasingly, the common recommendation is to invest in AI-based defenses and training, and to remain agile as the AI threat landscape continues to evolve.

Deloitte says firms need AI tools, third-party partners and new hires to fight fraud

New predictions from Deloitte look at how generative AI magnifies the risk of deepfakes and other fraud in banking – and what businesses can do in response.

Attacks like the Arup hack, says Deloitte, “will likely proliferate in the years ahead as bad actors find and deploy increasingly sophisticated, yet affordable, generative AI to defraud banks and their customers. Deloitte’s Center for Financial Services predicts that gen AI could enable fraud losses to reach US$40 billion in the United States by 2027, from US$12.3 billion in 2023, a compound annual growth rate of 32 percent.”

Deloitte’s more conservative prediction pegs losses closer to US$22 billion, which is still cause for significant concern.
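For readers who want to sanity-check the growth math, here is a rough back-of-the-envelope sketch. It assumes the projection spans the four years from 2023 to 2027 and uses the rounded dollar figures cited above, so the implied growth rates are approximate.

```python
# Rough check of the compound annual growth rate (CAGR) implied by Deloitte's
# projection, assuming a four-year span (2023-2027) and the rounded figures
# cited above (all values in US$ billions).
start_2023 = 12.3
aggressive_2027 = 40.0
conservative_2027 = 22.0
years = 4

def cagr(start: float, end: float, n: int) -> float:
    """Compound annual growth rate between two values over n years."""
    return (end / start) ** (1 / n) - 1

print(f"Aggressive scenario CAGR:   {cagr(start_2023, aggressive_2027, years):.1%}")   # roughly 34%
print(f"Conservative scenario CAGR: {cagr(start_2023, conservative_2027, years):.1%}") # roughly 16%
```

The aggressive-scenario result lands in the same ballpark as the 32 percent figure Deloitte cites; the small gap is presumably down to rounding in the published dollar amounts.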

“There is already an entire cottage industry on the dark web that sells scamming software from US$20 to thousands of dollars,” says Deloitte’s assessment. “This democratization of nefarious software is making a number of current anti-fraud tools less effective.” Business email compromise also remains a major problem, exploiting human vulnerabilities to cause substantial losses.

Risk management frameworks need updating, and biometric and digital identity tools are available to help. “While old fraud systems required business rules and decision trees, financial institutions today are commonly deploying artificial intelligence and machine learning tools to detect, alert, and respond to threats,” says Deloitte, naming JP Morgan and Mastercard among the large financial firms that have developed and launched AI-based defenses against fraud.

Banks, the assessment says, will need robust AI-based defenses to maintain a competitive edge. They will also need to adapt. “There won’t be one silver-bullet solution, so anti-fraud teams should continually accelerate their self-learning to keep pace with fraudsters. Future-proofing banks against fraud will also require banks to redesign their strategies, governance, and resources.”

Deloitte recommends looking outside the banking sector for “knowledgeable and trustworthy third-party technology providers.” Banks should “be actively participating in the development of new industry standards.” And they should bring in compliance teams early in the process, in anticipation of future regulatory queries.

More agile fraud teams are also needed, which means hiring – a necessity likely to come with costs. “For many banks, these investments will be expensive and difficult; they’re coming at a time when some bank leaders are prioritizing managing costs,” says Deloitte. “But to stay ahead of fraudsters, extensive training should be prioritized.”

Deepfake ‘AI gang’ attacks hit crypto exchange accounts

Warnings like these are too often met with choruses of skepticism, and alarms about AI are frequently challenged. But proof of the financial threat is not hard to come by. Magazine by Cointelegraph has coverage of a deepfake AI “gang” that siphoned US$11 million from an OKX crypto exchange account “within 25 minutes with no email or two-factor authentication warnings.”

The piece quotes Star Xu, founder of OKX, who claims a “coin-stealing hacking gang” is using deepfake AI to bypass the exchange’s facial recognition software. “Are all hacker gangs so arrogant now? The communication records between the perpetrators, beneficiaries, and victims of AI face-changing crimes constantly mislead the victims into believing that OKX stole the money,” Xu wrote. Investigations are ongoing.

DuckDuckGoose finally gets its seed after pecking away for years

Help is on the way: Silicon Canals reports that DuckDuckGoose, an AI deepfake detection startup based in Delft, Netherlands, has secured €1.3 million ($1.4 million) in a pre-seed funding round. The fowl-friendly firm was founded in 2020 and bootstrapped its way to this point. Its software promises analysis within a second, API integration, results it bills as 100 percent understandable, and 95 percent detection accuracy.

“This achievement brings DuckDuckGoose AI one step closer to our mission of creating a digital environment where we can still believe what we perceive, thanks to our cutting-edge deepfake and GenAI detection technology,” says Parya Lotfi, CEO and co-founder of DuckDuckGoose.

Big Government is also marshaling a defense plan – or at least knows it should be. The Washington Post reports (paywalled) that the director of the U.S. Cybersecurity and Infrastructure Security Agency thinks letting Big Tech dictate what happens with deepfakes is a bad idea. Jen Easterly believes AI will “inflame” the threat of weaponized propaganda, and that legislation is needed to keep it from being used by bad actors to cause harm.
