How AI fraudsters are capitalizing on the slow rollout of digital IDs

By Ofer Friedman, Chief Business Development Officer, AU10TIX

As professional fraudsters ramp up their attacks, leveraging generative AI and randomization tools, they are exploiting a narrow and closing window of opportunity. These advanced AI tools allow fraudsters to generate endless variations of fake identities, no two of them identical. This surge in AI-generated fraud isn't just the result of better tools, however; it is also the result of the slow and fragmented rollout of digital identities (digital IDs) and reusable verifiable credentials (RVCs).

Once digital wallets and RVCs become the global standard, identity forgers will face enormous challenges. Breaking the asymmetric cryptography that secures these systems is virtually impossible with today's computing power; quantum computing may eventually change that, but it is at least a decade away. So, how close are we to closing this window for fraudsters?
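
Before turning to that question, it is worth seeing why cryptographically signed credentials are so hard to forge. The minimal sketch below uses Python's cryptography package and an Ed25519 key pair; the credential payload, field names and issuer are invented for the example, and real wallet and verifiable credential formats are far richer, but the principle is the same: without the issuer's private key, an altered or fabricated credential simply fails verification.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical issuer key pair; in practice the private key never leaves the issuer.
issuer_private_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_private_key.public_key()

# Illustrative credential payload (not a real wallet or VC format).
credential = b'{"name": "Jane Doe", "dob": "1990-01-01", "issuer": "example-authority"}'
signature = issuer_private_key.sign(credential)

# A verifier holding only the public key can confirm the credential is genuine.
issuer_public_key.verify(signature, credential)  # raises no exception

# Any tampering, such as changing the date of birth, breaks the signature.
tampered = credential.replace(b"1990-01-01", b"2004-06-15")
try:
    issuer_public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampered credential rejected: forging one would require the issuer's private key")
```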

The promise of digital IDs is quickly becoming a reality. Countries from the US and Australia to the European Union and even parts of Africa are launching pilots and operational programs for digital, encrypted forms of identity verification.

Yet widespread adoption of digital IDs still faces several hurdles, from interoperability between systems to broad acceptance by governments and service providers. Until these issues are fully addressed, fraudsters will continue to thrive.

Fragmented rollouts: A fraudster’s paradise

While digital IDs hold the promise of seamless and secure identity verification, the global rollout is anything but streamlined or standardized. Different countries follow varied standards, and interoperability remains limited, even within individual regions.

In Australia, for instance, a mobile identity program is underway, but it is not expected to open until 2026 and may not be fully functional until as late as 2030. The U.S., meanwhile, lacks a unified federal system, with individual states and companies experimenting with digital wallets. This fragmented environment gives fraudsters ample opportunity to exploit gaps in identity verification.

For digital IDs to work at scale, they must be usable across borders, follow universal standards, and offer broad acceptance. But until this happens, fraudsters have an advantage. As systems struggle to catch up with the latest digital trends, criminals are seizing this opportunity, armed with generative AI tools that can create sophisticated fake IDs. These aren’t just amateur scams; they are carefully planned, large-scale operations.

AI’s role in identity fraud

As organizations roll out digital transformation initiatives, many remain woefully unprepared for AI-driven fraud. Most do not yet have the necessary defenses to combat AI-generated fakes, leaving them vulnerable to exploitation. The technology to detect AI fraud is still in its infancy, and even where it exists, many organizations fail to adopt it.

To effectively combat AI fraud, businesses need a two-layer defense strategy:

  • Case-level detection: Fraud detection tools must identify fakes at the individual level, analyzing every ID document or selfie submission with extreme precision.
  • Traffic-level detection: Understanding the broader patterns of fraud, including repeated behavior across multiple cases, is essential in identifying organized fraud rings.

While these defenses are emerging, many organizations are not equipped with either level of protection, allowing AI-powered fraudsters to slip through the cracks.
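
As a rough illustration of how these two layers might fit together, the sketch below combines a per-case score from an upstream document and selfie analyzer with a simple cross-case clustering step. The field names (device_fingerprint, template_hash, case_score) and the thresholds are hypothetical, and production systems draw on far richer signals, but the division of labor is the same: judge each submission on its own, then look for the repetition patterns that betray an organized operation.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Submission:
    case_id: str
    device_fingerprint: str  # hypothetical device/session identifier
    template_hash: str       # hypothetical hash of the ID document's layout and background
    case_score: float        # 0.0 (likely genuine) to 1.0 (likely fake), from a case-level detector

CASE_THRESHOLD = 0.8      # assumed cutoff for rejecting a single submission
CLUSTER_ALERT_SIZE = 5    # assumed repetition count that suggests an organized fraud ring

def review(submissions: list[Submission]) -> dict[str, set[str]]:
    """Combine case-level and traffic-level fraud signals (illustrative only)."""
    flagged = {"case_level": set(), "traffic_level": set()}

    # Layer 1: case-level detection. Each ID document or selfie is judged on its own.
    for s in submissions:
        if s.case_score >= CASE_THRESHOLD:
            flagged["case_level"].add(s.case_id)

    # Layer 2: traffic-level detection. Generative AI lets fraudsters submit many
    # variants of the same synthetic identity, so repeated devices or near-identical
    # document templates across supposedly unrelated cases are a strong ring signal.
    clusters = defaultdict(list)
    for s in submissions:
        clusters[(s.device_fingerprint, s.template_hash)].append(s.case_id)
    for case_ids in clusters.values():
        if len(case_ids) >= CLUSTER_ALERT_SIZE:
            flagged["traffic_level"].update(case_ids)

    return flagged
```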

NIST’s evolving guidelines and challenges ahead

The National Institute of Standards and Technology (NIST) recently published the second draft of its Digital Identity Guidelines (SP 800-63 Revision 4). This update aims to balance security with accessibility, offering guidance on emerging technologies like digital wallets and biometric verification.

Despite these advancements, however, gaps remain, especially around protections against social engineering and AI-enhanced fraud. NIST's updated guidelines do emphasize phishing-resistant authentication and include requirements for protection against advanced social engineering attacks, but the current pace of adoption and the complexity of deploying these solutions at scale leave organizations exposed. Until these guidelines are universally implemented, fraudsters will continue to exploit vulnerabilities.
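
For readers unfamiliar with the term, the sketch below shows in simplified form why origin-bound, phishing-resistant authentication works: the authenticator signs the server's challenge together with the origin it is actually talking to, so a response captured on a lookalike domain does not verify for the real site. This is a conceptual illustration only, not the actual WebAuthn/FIDO2 message format; the domain names and helper functions are invented for the example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # key held by the user's authenticator
public_key = device_key.public_key()       # registered with the legitimate site

def authenticator_sign(challenge: bytes, origin: str) -> bytes:
    """The authenticator binds its response to the origin it actually sees."""
    return device_key.sign(challenge + origin.encode())

def server_verify(signature: bytes, challenge: bytes, expected_origin: str) -> bool:
    """The real site accepts only responses bound to its own origin."""
    try:
        public_key.verify(signature, challenge + expected_origin.encode())
        return True
    except InvalidSignature:
        return False

challenge = b"random-server-challenge"
genuine = authenticator_sign(challenge, "https://bank.example")
phished = authenticator_sign(challenge, "https://bank-example.evil")  # lookalike site

print(server_verify(genuine, challenge, "https://bank.example"))  # True
print(server_verify(phished, challenge, "https://bank.example"))  # False: origin mismatch
```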

The future of digital IDs: A race against time

As the adoption of digital IDs accelerates, there’s no doubt that they will revolutionize identity verification. But the path to widespread use is not without challenges. Government priorities, public debate, and legal hurdles all contribute to slowing down the adoption process.

For example, mobile driver’s licenses (mDLs) are already accepted in thirteen U.S. states, but significant infrastructure is still needed to make these digital IDs ubiquitous and secure enough to replace traditional identification methods.

Fraudsters are capitalizing on these delays. With deepfakes, identity fraud, and misinformation on the rise, the need for secure and universally accepted digital IDs is becoming a critical issue for global security.

The challenge now is not just to protect data but to build verification on biometrics, which are harder to steal and replicate. With a massive amount of personal information already out in the wild due to data breaches, biometric and encrypted identity verification systems represent the future of fraud prevention.

In the battle against AI-powered fraud, the introduction of digital IDs will be a game changer. But until the global rollout is complete, organizations must invest in multi-layered AI-fraud detection systems. While governments and institutions work towards universal standards, fraudsters will continue to exploit the cracks. The stakes are higher than ever, and the time to act is now.

About the author

Ofer Friedman is chief business development officer for AU10TIX, the global technology leader in identity verification and ID management automation. He has 15 years of experience in the identity verification and compliance technology sector, and has worked with household names such as PayPal, Google, Payoneer, Binance, eToro, Uber, Rapyd, and Saxo Bank. Ofer began his career in advertising/marketing, working for the BBDO and Leo Burnett agencies. Connect with him on LinkedIn.
