The AI fraud scheme scammers use to bypass verification systems

By Konstantin Bulatov, Ph.D., Chief Technology Officer of OCR Studio
In the era of booming online services, many companies worldwide now rely on remote identity verification systems for KYC (Know-Your-Customer) checks and client onboarding. These systems allow users to access banking, fintech, telecom, insurance, and other services from virtually anywhere by simply uploading a photo or scan of an ID document. The adoption has been rapid – the global market for online identity verification is expected to exceed $18.2 billion by 2027.
At the same time, recent years have seen a sharp rise in fraud attacks through digital service channels. According to TransUnion's report, 8.3% of all digital account creation attempts in the first half of 2025 were suspected of fraud, making account creation the highest-risk stage in the customer lifecycle. Scammers have been leveraging so-called spoofing attacks – using stolen or fake ID photos to fool verification, such as pointing a phone screen showing someone else's ID at a camera. Using stolen identity documents and AI-generated or morphed fake IDs, they open bank accounts and take out loans. To combat this, many companies upgraded their verification processes to require a live photo or video of the user holding their ID, matching the ID to the person's face – and for a while such checks were quite effective.
However, fraudsters have now learned to bypass these protections. Recently, a fast-growing fraud scheme has emerged that leverages AI bots to beat remote verification in conveyor-belt fashion. Unlike earlier sophisticated forgeries that required expert skills and significant investment to produce, this new scheme works at scale and at minimal cost. It overwhelms companies' defenses with quantity over quality – carrying out thousands of attacks with almost no human intervention. The goal is simply to "punch through" a company's verification system by sheer volume of attempts, and even the most reliable remote ID verification systems are struggling under this onslaught.
How the new volume-based fraud scheme works
The new scam technique involves specialized AI bots that fully automate every stage of the attack – from creating an immense number of fake images to passing KYC checks and client onboarding at online services. In essence, the bots assemble fake "selfie with ID" photos and submit them to identity verification systems. The process works as follows:
- Gather leaked IDs. The bots start by collecting large numbers of stolen ID document images (passports, ID cards, driving licenses, etc.) from the dark web or hacker forums. There is an abundance of such leaked IDs available from past breaches of verification vendors and other organizations, and sooner or later those documents almost always end up being used by scammers.
- Find look-alikes. For each stolen ID, the AI searches the open web (especially social media) for people with similar facial features who are likely to be accepted as a match by verification systems. The goal is not to find a perfect twin, but someone "close enough" to pass the checks.
- Composite a "selfie with ID". Next, the bots glue together the person's photo and the stolen ID document image into a single picture, crafted to look like a real selfie of that person holding their ID. There is almost no limit to the number of fake images that can be generated.
- Mass-submit to verification systems. Finally, the bots automatically submit these composite images to various companies’ remote verification systems. At that scale, even a small false acceptance rate becomes exploitable, and some of the fakes inevitably pass the checks.
This scheme bets on volume and doesn’t care about each fake’s quality. On top of that, the entire production and attack process is fully automated, so generating thousands of forgeries costs the fraudsters practically nothing, and they succeed if even a tiny fraction of these fake images slip past the identity check. With the current pace of AI development, scammers can create endless variations of fake identities with minimal effort – a capability that simply did not exist at this scale a few years ago. As soon as one of the phony identities is verified, scammers can abuse the created account, taking out loans, obtaining credit cards, laundering money, and so on.
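The arithmetic behind "even a tiny fraction slips past" is worth making explicit: if each fake independently has some small chance of being wrongly accepted, the probability that at least one gets through rises rapidly with the number of attempts. A minimal sketch (the 0.1% false-acceptance rate and 10,000-attempt figures are illustrative assumptions, not measurements of any particular system):

```python
def breach_probability(far: float, attempts: int) -> float:
    """Probability that at least one fake is accepted, given a
    per-attempt false acceptance rate (FAR) and a number of
    independent submission attempts."""
    return 1.0 - (1.0 - far) ** attempts

# A single attempt against a 0.1% FAR is almost certain to fail...
print(breach_probability(far=0.001, attempts=1))

# ...but 10,000 automated attempts make at least one success near-certain.
print(f"{breach_probability(far=0.001, attempts=10_000):.5f}")
```

The independence assumption is generous to the defender; in practice, attempts tuned against the same system's blind spots may succeed at an even higher rate than this model suggests.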
Why such volume-based fraud schemes are possible now
Two major factors have enabled the rise of such fraud schemes:
- Constant leaks of ID documents. There has been a steady stream of data breaches exposing identity documents, often due to weaknesses at third-party verification providers. Many remote ID verification vendors upload user ID photos and scans to cloud servers or even crowdsourcing platforms, and hackers intercept data at these stages, leading to massive leaks of ID images. The problem has become especially acute recently – in 2025, for example, we have seen an unprecedented surge in data breaches worldwide. Consider just a few incidents: Connex Credit Union, one of Connecticut's largest credit unions, suffered a breach in June 2025 that exposed files of 172,000 customers, including government IDs used to open accounts. Around the same time, a hacker group infiltrated hotel systems in cities like Venice and Trieste, stealing up to 70,000 high-resolution ID scans. Later that year, Discord, a widely used social platform, suffered a breach at a third-party vendor – over 70,000 images of user IDs submitted for age verification were leaked. These three examples are just the tip of the iceberg. The glut of stolen IDs circulating online provides the raw fuel for the new fraud scheme – plenty of real ID data to feed into AI bots.
- Flaws in existing verification systems. Most modern remote ID verification systems are still imperfect: they are not accurate enough or simply do not conduct essential authenticity checks. In that environment, such mass attacks are almost guaranteed to succeed. The sheer volume of generated fakes lets criminals either "land" inside a system's 0.1% error margin or hit the blind spots created by missing controls. As a result, organizations that rely on these weak verification systems are turning identity checks into a probability game – and the fraudsters are the ones who will keep winning.
How businesses can fight back against this new fraud
It's grimly ironic that, while the industry focuses on detecting high-quality forgeries like deepfakes or AI-generated document images, many fraudsters keep cashing out with elementary "DIY" attacks. The solution, however, is straightforward in principle: businesses must choose their remote identity verification systems extremely carefully and demand the highest security standards. Organizations should consider how these systems handle data. Are user ID photos and personal data processed securely? Is PII (Personally Identifiable Information) sent to external cloud servers or crowdsourcing platforms where it could be leaked? Keeping the verification process within a controlled environment is key. Companies need to ensure their KYC provider isn't inadvertently providing "food" to fraudsters in the form of future leaked documents.
Fortunately, there are modern solutions on the market that address both the data-leakage and the bot-attack problems. The most robust approach is on-device identity verification. On-device solutions run entirely on the end user's smartphone or computer (leveraging the device's own CPU) instead of uploading sensitive data to a server. This means they physically cannot save or transmit user ID data to any third party. Moreover, an on-device system is resilient to the kind of attacks described above: if fraudsters try to bombard it with 1,000 fake images, all they achieve is maxing out their own device's processing load. The protection system on the business's side remains unaffected.
From a technical standpoint, building a high-accuracy face-matching and ID verification system that runs on-device is an extremely challenging task. Such systems must uphold top-tier speed and precision even when confined to the limited computing resources of a mobile device. At OCR Studio, we managed to solve this by developing a suite of ultra-lightweight neural networks. This let us create a verification system that performs an instant face-to-ID comparison right on the user’s device, providing security for both the user and the business.
Because the computation is decentralized to user devices, such systems are far more resistant to volume-based attacks like the new fraud scheme. Thus, on-device verification solutions offer a promising way forward, combining strong user privacy with robustness against the latest fraud techniques.
Your KYC is now a target
This volume-based, AI-assisted fraud wave is not just another threat – it undermines the very foundations of remote KYC. As such fraud schemes inevitably spread, businesses will increasingly suffer enormous losses from the steady appearance of unauthorized scam accounts. Unless regulatory requirements for identity verification vendors are raised dramatically, remote processes will remain fundamentally unreliable. Making on-premise solutions mandatory and prohibiting the storage and transmission of personal data is the only way to stop citizens' identity documents from freely roaming across the internet.
Companies, in turn, must raise the bar on two fronts. First, build their infrastructure according to privacy-first principles and minimize the amount of client data stored or transferred. The identity verification systems they choose should not transmit any data to clouds, external servers, or crowdsourcing platforms; otherwise, customers' PII may become tomorrow's breach. Second, invest in automated anti-forgery technologies: systems that conduct thorough authenticity checks can reliably detect low-quality composites – even when they arrive by the thousands. These protection measures have to be deployed across all KYC and onboarding processes – that is the only way to preserve customers' trust and security in an increasingly hostile digital environment.
About the author
Konstantin Bulatov is a scientist and Chief Technology Officer of OCR Studio, where he has led the development and implementation of advanced OCR technologies. He has designed a method for optimizing object recognition in video streams, which has improved the accuracy and efficiency of real-time OCR systems. Under his direction, OCR Studio develops secure on-device software solutions that address diverse industry needs and contribute to advancements in the field.
Konstantin is an IEEE Senior Member who has authored multiple patent applications and published his research in prominent academic conferences and journals. His work emphasizes innovative approaches to developing high-performance recognition systems, reinforcing OCR Studio's position as a significant contributor to the global technology landscape.