Financial fraud is exploding, fueled by cheap and easy generative AI tools

TransUnion report shows $3.1B exposed to synthetic identity fraud

If biometrics and digital identity providers seem hyperbolic about the risk of fraud and the importance of data security, consider these numbers from TransUnion’s 2024 State of Omnichannel Fraud Report, released today: from 2020 to 2023, data breaches in the U.S. increased by 157 percent, one in seven newly created digital accounts is suspected to be fraudulent, and $3.1 billion was left exposed to synthetic identity fraud attacks.

Synthetic identity fraud, built on fabricated or stolen identities, jumped 14 percent year over year and is having a major impact on the retail, travel and video gaming industries, among others, according to a press release on the report. The trend shows fraudsters attempting to hijack transactions earlier in the process of account signup, loan origination and other onboarding and enrollment workflows.

“This early phase of new account fraud may represent a paradigm shift of sorts among fraudsters,” says Steve Yin, SVP and global head of fraud solutions at TransUnion. “In lieu of using traditional tactics to gain access to and ultimately compromise existing accounts, they are increasingly choosing to create new accounts that they can control themselves.”

‘Pig-butchering’ among new and evolving financial fraud scams

Should anyone question the veracity of an industry report, Interpol would like a word. The international criminal police organization has published an assessment on global financial fraud that shows how technology enables the growth and increasing sophistication of organized crime. In a release, Interpol says generative AI, large language models and cryptocurrencies are among the technologies that have lowered the financial and technical hurdles to accessing sophisticated tools for fraud.

Secretary General Jürgen Stock calls the result “an epidemic in the growth of financial fraud, leading to individuals, often vulnerable people, and companies being defrauded on a massive and global scale.”

“With the development of AI and Cryptocurrencies,” Stock says, “the situation is only going to get worse without urgent action.”

Dominant types of fraud vary across continents. One new, rather aggressively named scheme currently on the rise in Asia, Africa and Europe is “pig butchering fraud,” which combines online romance scams and investment scams with cryptocurrency for a criminal process that mimics the fattening of a hog for slaughter.

Generative AI enables easier and better fraud for less money

In what is fast becoming a common trend, the rate of technological development continues to outpace adequate defensive responses. Generative AI has reached an inflection point at which accessing the tools to create a fake ID is as simple as opening up an AI image generator and issuing a prompt. According to an article by Tatiana Walk-Morris in Dark Reading, the Deloitte Center for Financial Services says synthetic identity fraud could lead to $23 billion in losses by 2030.

Ari Jacoby, CEO of AI-driven fraud detection tool Deduce, says generative AI will “crush” existing defenses against counterfeit IDs. “If you want to use that data that already exists for almost everybody to create a selfie, that’s not hard,” Jacoby says. “There’s an enormous group of bad guys, bad folks out there, that are now weaponizing this type of artificial intelligence to accelerate the pace at which they can commit crimes. That’s the low end of the spectrum. Imagine what’s happening on the high end of the spectrum with organized crime and enormous financial resources.”

What to do? Using tech to fight tech is a popular option: AI and behavioral analytics can be deployed to distinguish between real customers and synthetic ones. Advances in biometrics, identity verification and authentication tech also help. In short, there are solutions – but it matters where they come from.
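To make the behavioral-analytics idea concrete, here is a minimal sketch (not any vendor's actual method) of one signal such systems can use: scripted form-filling tends to emit keystrokes at near-constant intervals, while human typing shows natural variance. The feature and threshold below are illustrative assumptions.

```python
import statistics

def is_suspiciously_uniform(keystroke_gaps_ms, min_stdev_ms=25.0):
    """Flag a session whose inter-keystroke gaps are too regular.

    Human typing usually shows substantial timing variance; bots and
    scripted form-fills often do not. The 25 ms threshold is an
    illustrative assumption, not a production-tuned value.
    """
    if len(keystroke_gaps_ms) < 2:
        return False  # not enough signal to judge
    return statistics.stdev(keystroke_gaps_ms) < min_stdev_ms

# A scripted bot typing at a near-fixed 50 ms cadence:
print(is_suspiciously_uniform([50, 50, 51, 50, 50, 49]))    # True
# A human, with natural pauses and bursts:
print(is_suspiciously_uniform([120, 85, 310, 95, 640, 110]))  # False
```

Real deployments combine dozens of such signals (mouse trajectories, paste events, device telemetry) and score them with trained models rather than a single hand-set threshold.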

Big names recognize the problem. Google Cloud has announced a partnership with BioCatch to extend fraud-prevention efforts into new and expanding markets, particularly Southeast Asia. Per a release, BioCatch and Google Cloud will support financial institutions in the APAC region, which is facing rapid growth in financial cybercrime and, in the words of BioCatch APAC’s Richard Booth, “sophisticated social engineering tactics that have proved difficult to foil in real-time with legacy security controls.”

A release from Mastercard says it also has a new fraud-focused deal, this one with payments platform Network International, which will offer Mastercard’s AI-driven Brighterion fraud prevention platform to over 60,000 merchants in Africa and the Middle East. Brighterion uses continuously updated machine learning algorithms to monitor transactions for compliance, to combat what company representatives call the rapid evolution of cyber threats. Mastercard offers behavioral biometrics through its subsidiary NuData.

Startups are also adding behavioral biometrics to prevent fraud, with San Francisco-based Darwinium joining the market this week. Darwinium added digital signatures based on behavioral biometrics to its online trust software. The behavior signatures combine with device signatures for what Darwinium calls “Digital DNA.”
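As a conceptual sketch of combining behavioral and device signals into one identifier (this is not Darwinium's actual "Digital DNA" implementation, and the feature names are invented for illustration), the two signal sets can be canonicalized and hashed so that the same observed signals always map to the same signature:

```python
import hashlib
import json

def combined_signature(behavior_features: dict, device_features: dict) -> str:
    """Derive a stable identifier from behavioral and device signals.

    Illustrative only: real systems use fuzzy matching and trained
    models rather than an exact hash, since behavior varies session
    to session.
    """
    payload = json.dumps(
        {"behavior": behavior_features, "device": device_features},
        sort_keys=True,  # stable key order so equal inputs hash equally
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

sig_a = combined_signature(
    {"avg_key_gap_ms": 142, "mouse_curve": "arc"},
    {"os": "macOS", "screen": "2560x1440"},
)
sig_b = combined_signature(
    {"avg_key_gap_ms": 142, "mouse_curve": "arc"},
    {"os": "macOS", "screen": "2560x1440"},
)
print(sig_a == sig_b)  # True: identical signals yield the same signature
```

The design point the sketch captures is that neither signal type alone is decisive: device fingerprints can be spoofed and behavior alone is noisy, but binding the two makes impersonation harder.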

Flexibility and real-time monitoring help to fight generative AI

A key message in the discussion on how to safeguard businesses against generative AI, deepfakes and other vectors for financial fraud centers on diligence and adaptability. Continuous learning, real-time threat detection and AI-assisted biometric systems are needed to ensure that evolution in prevention matches evolution in risk.

However, there is a hard truth to face, according to Matt Miller, principal of cybersecurity services at KPMG U.S. “It’s incumbent upon the institutions that are providing this technology to not only understand them,” he says, “but to really understand the risks associated with them and be able to educate on the proper use and also be able to control their own platforms.”

In other words, no matter how adept the industry gets at defense, there is an onus of responsibility on those developing AI technology to help mitigate the risks they are creating – something at which tech entrepreneurs have so far, regrettably, not been very good.
