Quality, depth, and scale of AI fraud pose greatest financial services threat in history
By Mike Gross, VP, Applied Fraud Research & Analytics at Experian
Over the last year, artificial intelligence (AI) tools, especially ChatGPT and Bard, have garnered a great deal of coverage for their ability to generate a wide range of outputs such as marketing content, reports, and even movie screenplays. Following these novel applications of the technology, fraud-related uses quickly appeared in the headlines, along with examples of AI-generated emails, text messages, and deep-fake videos and voice calls.
For those who have been in the fraud-prevention business, the use of AI for phishing campaigns and other scams is nothing new. However, with generative AI now widely available, the biggest concerns involve the quality, depth, and scale of attacks, particularly around impersonation of people and businesses. The latest schemes take an individual’s underlying identity data from mass compromises and create a realistic consumer record with a full historical dossier of personal information, complete with physical and digital artifacts. That includes near-perfect fakes of government documents such as driver’s licenses, high-quality biometrics, and a long digital trail across social media, employment records, tenured addresses, phones, emails, and more, producing synthetic identities that are truly lifelike.
Fraudsters can then use AI to build and launch tools that put this capability in the hands of anyone who wants to generate a new identity – or hundreds of thousands of them – in seconds.
In the past, fraudsters created these synthetic identities manually, requiring extensive research and time to establish an offline and digital presence. Now, criminal rings have generative AI at their fingertips to do the same in-depth work at scale and with much higher quality. A fully automated, AI-enabled attacker system can also produce a multitude of documents and respond via voice, video, email, or chat just like a human across thousands of simultaneous social engineering attempts in real time. Of course, fraudsters need access to identity-related data to create these fake identities, which is why keeping that data out of the public domain is critical.
Where and how are AI-based fraud attacks happening?
Even when consumers or businesses put stringent precautions in place, the hyper-personalization of attacks is a very concerning new trend. On the consumer side, unsuspecting victims are increasingly being scammed by highly personalized, targeted attacks that trick them into making instant transfers and real-time peer-to-peer payments. Because the legitimate account holder makes the request and passes any strong authentication measures, these scams can drain accounts in seconds. And for financial institutions that take on the liability for consumer-initiated actions like funds transfers, this type of fraud carries significant monetary risk at scale.
On the business side, bad actors have used Business Email Compromise (BEC) for years. But with generative AI, those same attacks can mimic a company executive’s voice or writing style to make far more convincing requests of employees to perform financial transactions or to hand over confidential information. In government, fraudsters can target state and federal officials, learn chain-of-command protocols and approvals in a manner very similar to corporate attacks, then seek access to sensitive materials from top-ranking officials or those with top-secret clearance.
To facilitate these attacks, generative AI enables virtually anyone to automate and scale the process of setting up multiple realistic bank, ecommerce, healthcare, government, and social media accounts and apps to phish a consumer’s or executive’s credentials and identity data. In these cases, it’s extremely difficult for victims to distinguish legitimate, trusted organizations from generated or copied content with working links and pages on near-perfect spoofed sites.
In the battle against AI fraud, fighting fire with fire is the only way to keep up. Major brands are already employing AI to automatically scan for sites and apps that imitate theirs and to quickly spot new patterns or inconsistencies in consumer behavior on their own channels.
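To make the scanning idea concrete, here is a minimal sketch in Python that flags newly registered domains resembling a protected brand domain. The brand domain, the domain feed, and the 0.8 similarity threshold are all hypothetical, and real brand-protection systems rely on far richer signals (WHOIS records, page content, visual similarity) than simple string matching.

```python
import difflib
import itertools

# Hypothetical brand domain(s) the organization wants to protect.
BRAND_DOMAINS = ["examplebank.com"]

def lookalike_score(candidate: str, brand: str) -> float:
    """Return a 0-1 string-similarity score between a candidate domain and a brand domain."""
    return difflib.SequenceMatcher(None, candidate.lower(), brand.lower()).ratio()

def flag_lookalikes(candidates, threshold=0.8):
    """Flag candidate domains that closely resemble a protected brand domain."""
    flagged = []
    for candidate, brand in itertools.product(candidates, BRAND_DOMAINS):
        score = lookalike_score(candidate, brand)
        if candidate != brand and score >= threshold:
            flagged.append((candidate, brand, round(score, 2)))
    return flagged

# e.g. a daily feed of newly registered domains (hypothetical sample)
new_domains = ["examp1ebank.com", "examplebank-secure.com", "unrelated.org"]
print(flag_lookalikes(new_domains))
# -> [('examp1ebank.com', 'examplebank.com', 0.93), ('examplebank-secure.com', 'examplebank.com', 0.81)]
```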
Another battlefront is new account openings. Organizations increasingly need multiple layers of identity verification and authentication controls, as well as the ability to quickly spot anomalies and unusual behaviors that betray attackers who have used AI to socially engineer victims. This requires a combination of visible and invisible controls that not only verify and authenticate identities but can also assess manipulation and intent.
For example, companies often verify a consumer’s identity data against trusted data sources, then use identity authentication measures such as one-time passwords (OTPs), phone checks, document verification tools, and voice biometrics. Those are all powerful, necessary solutions. However, attackers armed with AI can now create voice deep fakes from a collection of social media videos, or socially engineer victims using convincing customer-support scripts trained on each of the top banks’ or merchants’ authentication processes and fraud flows. Fraudsters can then deploy phone banks of voice bots that call thousands of consumers simultaneously to harvest OTPs or PINs, fueling massive new-account-opening blasts or credential-stuffing attacks. This is why no single check can stand alone; the layers have to be combined, as sketched below.
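To illustrate why layering matters, below is a minimal sketch that combines independent verification outcomes into one risk decision. The signal names, weights, and thresholds are invented for illustration and do not represent any Experian product or scoring model; the point is that an attacker who defeats one control, say an OTP harvested by a voice bot, still trips the others.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Outcomes of independent identity checks for one new-account application (hypothetical)."""
    data_match: bool          # identity data verified against trusted sources
    otp_passed: bool          # one-time password confirmed
    doc_verified: bool        # government ID passed document verification
    voice_liveness: float     # 0-1 liveness score from voice biometrics
    device_reputation: float  # 0-1 score from device/behavioral history

def risk_score(s: VerificationSignals) -> float:
    """Weighted sum of failure signals; higher means riskier (weights are illustrative)."""
    score = 0.0
    score += 0.0 if s.data_match else 0.30
    score += 0.0 if s.otp_passed else 0.15
    score += 0.0 if s.doc_verified else 0.15
    score += (1.0 - s.voice_liveness) * 0.20    # deep-fake audio tends to score low on liveness
    score += (1.0 - s.device_reputation) * 0.20
    return score

def decide(s: VerificationSignals, review_at: float = 0.25, block_at: float = 0.5) -> str:
    """Route an application to approve, manual review, or block (thresholds are hypothetical)."""
    r = risk_score(s)
    if r >= block_at:
        return "block"
    if r >= review_at:
        return "review"
    return "approve"

# An applicant who passes the OTP (perhaps via social engineering) but scores poorly
# on liveness and device reputation still gets routed to manual review.
applicant = VerificationSignals(data_match=True, otp_passed=True, doc_verified=True,
                                voice_liveness=0.2, device_reputation=0.4)
print(decide(applicant))  # -> "review"
```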
Combating fraud amid the rise of biometric attacks
With so many easier avenues to perpetrate fraud, attackers haven’t yet needed sophisticated deep-fake voice, image, and video to pass facial or voice biometric checks. However, as organizations adopt additional layered controls for identity verification, authentication, and transactional risk, bad actors will increasingly turn to these types of attacks. Here are a few ways businesses can protect themselves:
- First and foremost, organizations must continue to educate consumers and customers in personalized ways and through numerous communications channels (e.g., websites, video tutorials, e-newsletters, mailers). A proactive education effort helps ensure that consumers are aware of the latest fraud attacks, active participants in their own protection, and the first line of defense against attackers.
- Another key defense is ensuring that fraud-prevention and identity-protection processes no longer happen in silos. All data and controls must feed systems and teams that can analyze signals, define holistic decisioning logic, and build models that are continuously trained on good and bad traffic. This enables a seamless, personalized experience for authentic consumers while blocking attempts from AI-enabled attackers. Businesses also need to capture and aggregate data across their public domains and internal systems, and consolidate visibility into systems that monitor and alert on anomalies throughout the customer journey, from upfront cyber controls to customer onboarding, account management, and transactions.
- This consolidation includes data from security and cyber controls, because the weakest link is often a new web or app vulnerability or an employee providing credentials in a phishing attack. Bringing together all offline, online, bot, behavioral, biometric, transactional, and cross-industry sources of data enables companies to instantly spot early signals of potential fraud, mitigate the risk, and preserve a positive customer experience. A simplified sketch of this kind of cross-channel anomaly monitoring follows this list.
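As a simplified illustration of consolidated monitoring, the sketch below aggregates risk-weighted events from multiple hypothetical channels and flags customers whose activity deviates sharply from their own historical baseline. Production systems would use continuously trained models over far richer features, but the aggregate-then-score-anomalies shape is the same. All event names, weights, and baselines here are invented for the example.

```python
import statistics
from collections import defaultdict

# Hypothetical event stream: (customer_id, channel, risk_weight) tuples aggregated
# from cyber controls, onboarding, account management, and transaction systems.
events = [
    ("cust-1", "login", 1), ("cust-1", "transfer", 2), ("cust-1", "login", 1),
    ("cust-2", "login", 1), ("cust-2", "password_reset", 3), ("cust-2", "new_payee", 3),
    ("cust-2", "transfer", 2), ("cust-2", "transfer", 2), ("cust-2", "transfer", 2),
]

def daily_risk_totals(events):
    """Sum risk-weighted activity per customer (a stand-in for one day's aggregation)."""
    totals = defaultdict(int)
    for customer, _channel, weight in events:
        totals[customer] += weight
    return totals

def flag_anomalies(totals, history, z_threshold=2.0):
    """Flag customers whose total deviates from their own historical mean by more than z_threshold sigmas."""
    flagged = []
    for customer, total in totals.items():
        past = history.get(customer, [])
        if len(past) < 2:
            continue  # not enough baseline data to judge
        mean, stdev = statistics.mean(past), statistics.stdev(past)
        if stdev > 0 and (total - mean) / stdev > z_threshold:
            flagged.append(customer)
    return flagged

# Hypothetical per-customer baselines from previous days.
history = {"cust-1": [3, 4, 4, 5], "cust-2": [2, 3, 3, 2]}
print(flag_anomalies(daily_risk_totals(events), history))  # -> ['cust-2']
```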
While widespread biometric AI fraud has yet to arrive, the time to prepare for deep-fake voice, image, and video attacks is now. Educating consumers, consolidating cross-organizational fraud-prevention and identity-protection processes, and centralizing visibility into systems that monitor and alert on anomalies throughout the customer journey are just the first steps in what will be a very long-term defense strategy against AI-enabled attackers.
About the author
Mike Gross is Vice President of Applied Fraud Research & Analytics for the Global Fraud & Identity group that is part of Experian’s Software Solutions business. His focus areas include fraud analytics, identity and payment authentication technologies, strategic partnerships, understanding emerging fraud threats, and optimizing fraud performance strategies.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author and don’t necessarily reflect the views of Biometric Update.