Fraud accompli: synthetic identities, injection attacks change security landscape

Ease of access to tech for face swaps, deepfakes necessitates layered IDV
The annual Identity Verification Threat Report from iProov has arrived bearing a potent message: things have changed. Whether we’re through the looking glass, across the Rubicon, past the point of no return or over the line in the sand, the rapid technological development and increased availability of technologies used in fraud and biometric identity crime have made legacy security and ID verification systems obsolete.

“In 2014, creating synthetic identities required extensive technical expertise, specialized equipment and significant time investment,” says iProov Chief Scientific Officer Dr. Andrew Newell. But in 2025, there is an easily accessible marketplace for tools and services that bad actors with minimal technical expertise can use to generate high-quality synthetic media in real time. Deepfakes, of both the video and audio variety, have been commodified and commercialized.

Yet while machine learning, generative adversarial networks (GANs), deep learning, face swaps and other AI tools are enabling the explosion in deepfake identity fraud, it’s not just the tech that’s changing. The stereotypes of the hoodie hacker and the hostile foreign power are in reality dwarfed by new fraud-as-a-service networks that offer customers outsourced fraud in the form of a simple subscription. These criminal service networks, which have been linked to human rights abuses in Southeast Asia, approach fraud as a cruel but lucrative business model, and are driving attacks on an unprecedented scale.

“As the rapid proliferation of offensive tools continues to accelerate, security measures are struggling to keep up,” says Dr. Newell. “We are moving to a world where the authenticity of digital media is becoming impossible to establish by the human eye, making this a problem not just for traditional targets but for any organization or individual that relies upon the authenticity of digital media to establish trust.”

Injection attacks, face swaps exploit virtual cameras

The report draws from data collated by iProov’s Security Operations Center (iSOC), which collects and analyzes threats targeting remote verification systems, including dark web monitoring, pattern analysis of detected and prevented attacks and technical evaluation of attack tools and methodologies. Its most alarming finding is what it calls “a skyrocketing increase in native virtual camera and face swap attacks.”

Biometric injection attacks, in which falsified media is “injected” into a feed, are hijacking cameras at an astonishing rate. Per the report, native virtual camera attacks have become the primary threat vector, increasing by 2665 percent due in part to mainstream app store infiltration.

Face swap attacks increased 300 percent from 2023, “with threat actors shifting focus to systems using liveness detection protocols.” The online crime-as-a-service ecosystem now boasts some 24,000 entities selling attack technologies that make previously complex operations like image-to-video conversion as easy as clicking a few buttons. And some use malicious sleeper code that remains dormant for an extended period before activating.

The broad implications of the report are troubling, indeed, in that they show not only technical agility on the part of fraudsters, but also more structured organization and strategic aggression. The report points to a shift toward long-term fraud strategies that purposefully target fraud prevention technologies like liveness detection, and cross-pollinate to offer fraudsters thousands of potential attack combinations.

“Relying on outdated security measures is like leaving the front door open to fraudsters,” says Dr. Newell. The need for layered, robust identity verification to secure system access and high-value transactions has never been greater.

Biometric security enters fraudsters’ crosshairs

In a blog for Interface, ISMS.online Chief Product Officer Sam Peters echoes the concern expressed in iProov’s report that fraudsters are no longer just targeting people or businesses, but also the tools designed to thwart them.

Biometric security measures like facial recognition and fingerprinting have been widely adopted – which, Peters writes, has turned them into prime targets for attackers. “The threat demands our attention as, unlike passwords which can be changed, compromised biometric data is permanent, amplifying the risks associated with its theft.”

While some of Peters’ concern focuses on wearable tech that can be hacked for personal info, he also sounds the alarm on deepfakes created with increasingly sophisticated deep learning and image generation models, noting that “the implications of deepfake technology extend beyond financial fraud, potentially undermining biometric authentication systems altogether.”

“By combining deepfake technology with stolen biometric data, attackers can craft highly convincing scams, leaving both individuals and enterprises vulnerable.”

The recipe for security, Peters says, involves a mix of regulation (and compliance), and layered security that augments and supports biometric authentication, including multi-factor authentication (MFA) and liveness detection.

And it requires efforts on all sides. “Device manufacturers must prioritise security features in their products, incorporating measures like end-to-end encryption and data minimisation practices – key principles of GDPR.” He notes that established standards and frameworks such as ISO/IEC 27001 can “offer clear strategies for identifying reliable suppliers and enhancing authentication practices.”

Complex signal analysis detects fraud with Massachusetts identities: Socure

Socure has published a case study on how it “identified and stopped a surge in fraudulent activity targeting the retail banking and credit card operations of large financial institutions with stolen Massachusetts identities.”

When a suspicious number of applications started coming from purported residents of Massachusetts born between 1975 and 1990, Socure looked at patterns across four key areas indicating a concerted fraud effort.

Many applications used gibberish email handles on a handful of specific domains, correlated with the stolen identities. In particular, the luuinet.com email domain has been associated with 5,500 applications tied to Massachusetts identities.

Increased volumes of applications from Massachusetts arrived in the middle of the night, while IP addresses came from across the U.S., strongly suggesting the use of VPNs or proxy services. “Notably,” Socure says, “over 89 percent of flagged applications came from geolocations that were more than 100 miles away from the declared address.” And most of the phone numbers associated with the applications were flagged for limited activity.
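The signals described above (gibberish handles tied to a suspicious domain, overnight submissions and large gaps between IP geolocation and the declared address) can be sketched as simple heuristics. This is an illustrative toy under assumed field names and thresholds, not Socure’s actual model, which the report describes as AI-driven rather than rule-based.

```python
import math
from collections import Counter

def handle_entropy(local_part: str) -> float:
    """Shannon entropy of an email handle; random gibberish scores high."""
    counts = Counter(local_part)
    n = len(local_part)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_application(app: dict) -> list[str]:
    """Return the fraud signals an application trips (toy rules only)."""
    flags = []
    local, _, domain = app["email"].partition("@")
    # Gibberish handle on a domain already linked to the attack
    if domain == "luuinet.com" and handle_entropy(local) > 3.0:
        flags.append("gibberish_handle_suspect_domain")
    # Submitted in the middle of the night, local time (assumed field)
    if app["hour_local"] < 6:
        flags.append("overnight_submission")
    # IP geolocation far from the declared address (report cites >100 miles)
    if app["ip_distance_miles"] > 100:
        flags.append("geo_mismatch")
    return flags

sample = {"email": "xq7kzp1mw@luuinet.com", "hour_local": 3, "ip_distance_miles": 450}
print(flag_application(sample))
# → ['gibberish_handle_suspect_domain', 'overnight_submission', 'geo_mismatch']
```

In practice each signal alone is weak; it is the correlation across all of them, at volume, that indicates a concerted campaign.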

“The exclusive use of Massachusetts identities in this attack strongly suggests that a data breach is at the heart of this effort. The use of gibberish email handles indicates automated generation.”

The firm says it has “strong evidence to believe that a China-based actor is behind this attack.”

“It turns out that luuinet.com is a domain that was registered in China in 2023.” And in comparing spikes in Massachusetts applications, Socure’s data shows that “the spikes in volume match closely to the working day hours in China.”
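The time-of-day correlation is a quick timezone calculation: overnight hours on the U.S. East Coast fall squarely within the working day in China (UTC+8). A minimal check with Python’s standard zoneinfo module, using an illustrative timestamp:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A burst of applications logged at 3 AM U.S. Eastern time (illustrative date)...
burst = datetime(2024, 11, 5, 3, 0, tzinfo=ZoneInfo("America/New_York"))

# ...lands in mid-afternoon in China (UTC+8, no daylight saving)
in_china = burst.astimezone(ZoneInfo("Asia/Shanghai"))
print(in_china.hour)  # → 16, i.e. 4 PM in Shanghai
```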

The major takeaway? “Modern fraud prevention must leverage advanced AI-driven solutions that can detect nuanced patterns, anomalies and synthetic identity elements in real time.”

Frankenstein fraud lumbers onto the scene with doctored identities

Just when you thought fraud couldn’t get any more frightening, Thomson Reuters has a white paper on what it calls “Frankenstein fraud” – synthetic identity fraud in which parts of different identities are stitched together into a fraudulent composite.

A blog summary says “synthetic ID fraud creates fabricated identities using names, addresses, birthdates and Social Security Numbers (SSNs) from different people. Once assembled, the fabricated ID can be used to apply for bank accounts, credit, loans and government benefits.”

“Anyone’s information can be used to cobble together synthetic identities – children, the elderly, and people with unstable housing are often targeted because these individuals are less likely to regularly access credit file reports and detect suspicious activity.”

Ultimately, says the report, any data collected in biometric authentication systems must be secured, and “agency leaders need to ask themselves whether their agency is working with third-party vendors that have a record of good practices regarding client data security.” Because if Frankenstein fraudsters get their hands on biometric data, they might be of a mind to make something monstrous.

The full white paper is available to download here.
