Synthetic identity blends real and fake data to enable fraud, demanding new protections

Faces are everywhere, stored and displayed by the millions on social media platforms. Generative AI technologies have enabled the large-scale harvesting and manipulation of face biometrics, giving rise to the new threat of synthetic identity fraud.
PXL Vision and the Biometrics Security and Privacy group at Idiap Research Institute have partnered to develop a “robust AI-based counterfeit detection solution,” according to a post on LinkedIn. A synthetic ID primer from PXL Vision explains that perpetrators of synthetic identity fraud use AI to “create a completely new identity from a mixture of stolen, manipulated and fictitious information.”
Manufactured synthetic identities merge and blend real identity details from different stolen identities: a real ID number might be paired with a fake name or address and linked to a deepfaked image that lines up with the hacked identity data. Manipulated synthetic identities, by contrast, start from a single real identity and alter an existing identity document.
The widespread shift toward digital identity verification and authentication processes, as illustrated by the EUDI Wallet scheme, brings new risks: “the transition to digital identity opens up new areas of attack – precisely because AI-supported fraud scams are likely to become increasingly sophisticated in the future.”
PXL Vision uses near field communication (NFC) and liveness checks to recognize and prevent fraud attempts. Moreover, “another key component is video injection detection, which identifies manipulated or artificially generated videos for deception. This is done by analysing metadata, movement patterns and digital artefacts that may indicate manipulation.”
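PXL Vision has not published the internals of its injection detection, but one of the signals it names, metadata analysis, is easy to illustrate. The sketch below shells out to ffprobe (which ships with FFmpeg) and flags containers whose tags look re-encoded or rendered rather than captured on a device camera; the encoder blocklist and the missing-creation-time rule are illustrative assumptions, not the company's actual checks.

```python
import json
import subprocess

# Heuristic markers that often show up when a video has been re-encoded or
# rendered rather than captured live by a device camera. Illustrative
# assumptions only, not PXL Vision's actual rules.
SUSPICIOUS_ENCODERS = ("lavf", "libavformat", "obs", "virtualcam")

def probe_metadata(path: str) -> dict:
    """Extract container and stream metadata with ffprobe (part of FFmpeg)."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def metadata_red_flags(meta: dict) -> list[str]:
    """Return reasons this video's metadata looks injected or re-encoded."""
    flags = []
    tags = meta.get("format", {}).get("tags", {})
    encoder = tags.get("encoder", "").lower()
    if any(marker in encoder for marker in SUSPICIOUS_ENCODERS):
        flags.append(f"suspicious encoder tag: {encoder!r}")
    if "creation_time" not in tags:
        # Genuine camera apps almost always stamp a creation time.
        flags.append("missing creation_time tag")
    return flags

if __name__ == "__main__":
    red_flags = metadata_red_flags(probe_metadata("capture.mp4"))
    print("reject" if red_flags else "pass", red_flags)
```

Metadata is trivial to forge, so in practice a check like this would be only one weak signal, combined with the movement-pattern and pixel-artefact analysis the company also mentions.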
The deepfake project with Idiap is supported by the Swiss innovation promotion agency Innosuisse.
Yoti liveness and injection detection defend against direct and indirect attack vectors
A new white paper from Yoti delves into the threat of Generative AI. “The rate of development of generative AI presents a problem to not just ensuring a person is who they say they are, but also to content platforms who need to be sure that the content added by a user is genuine,” says the paper. “Given the potential risks and challenges in detecting generative AI, Yoti’s strategy emphasises early detection at the source, addressing both direct and indirect attack vectors.”
While presentation attacks are a “relatively mature and well understood issue across the verification space,” well defended by effective presentation attack detection (PAD) and liveness checks, more recently popularized injection attacks attempt to bypass liveness detection by feeding imagery directly into the capture pipeline, for instance through a compromised hardware device or a virtual camera.
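Virtual cameras are the most concrete example of the indirect vector. A crude first-line check, and only an illustrative one, since neither Yoti nor anyone else publishes their detection logic, is to enumerate the host's camera devices and match them against known virtual-camera driver names. The Linux-only sketch below reads V4L2 device names from sysfs; the blocklist is a non-exhaustive assumption.

```python
from pathlib import Path

# Driver names of popular virtual cameras. A non-exhaustive, illustrative
# blocklist; not any vendor's actual detection logic.
VIRTUAL_CAMERA_NAMES = ("obs virtual camera", "v4l2loopback", "droidcam",
                        "manycam", "snap camera")

def find_virtual_cameras() -> list[str]:
    """Scan V4L2 device names via sysfs (Linux) for known virtual cameras."""
    hits = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        device_name = name_file.read_text().strip().lower()
        if any(v in device_name for v in VIRTUAL_CAMERA_NAMES):
            hits.append(device_name)
    return hits

if __name__ == "__main__":
    suspects = find_virtual_cameras()
    print("virtual cameras present:", suspects or "none detected")
```

Name-based blocklists are trivially evaded by renaming a driver, which is why production systems pair device-level signals with image forensics and with cryptographic binding of the capture itself, as in the secure capture approach below.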
Yoti says the latest version of its MyFace SICAP (Secure Image CAPture), “a new way of adding security at the point an image is being taken for a liveness or facematch check,” is able to detect both hardware and software attacks.
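Yoti has not disclosed how SICAP works internally, but the general idea of securing the point of capture can be sketched with standard primitives: hash the frame as it comes off the sensor, bind it to a timestamp and nonce, and authenticate the bundle with a key the server can verify. The shared-secret HMAC scheme below is a minimal stand-in for whatever attested device keys a real deployment would use; all names here are hypothetical.

```python
import hashlib
import hmac
import os
import time

# Stand-in for a provisioned, attested per-device key; a real deployment
# would never generate it ad hoc like this. All names are hypothetical.
CAPTURE_KEY = os.urandom(32)

def sign_capture(frame_bytes: bytes) -> dict:
    """Bind the captured frame to a timestamp and nonce, then MAC the bundle."""
    nonce = os.urandom(16)
    ts = int(time.time()).to_bytes(8, "big")
    frame_hash = hashlib.sha256(frame_bytes).digest()
    tag = hmac.new(CAPTURE_KEY, frame_hash + ts + nonce, hashlib.sha256).digest()
    return {"frame_hash": frame_hash, "ts": ts, "nonce": nonce, "tag": tag}

def verify_capture(frame_bytes: bytes, bundle: dict) -> bool:
    """Server-side check: the submitted image must match the signed bundle."""
    expected = hmac.new(
        CAPTURE_KEY,
        hashlib.sha256(frame_bytes).digest() + bundle["ts"] + bundle["nonce"],
        hashlib.sha256,
    ).digest()
    return hmac.compare_digest(expected, bundle["tag"])

# A frame swapped in after capture fails verification:
bundle = sign_capture(b"genuine selfie bytes")
assert verify_capture(b"genuine selfie bytes", bundle)
assert not verify_capture(b"injected deepfake bytes", bundle)
```

The point of a scheme like this is that an image injected downstream of the capture component fails verification, because the attacker cannot produce a valid tag over the substituted frame.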
Report from iProov highlights scale of identity attack arsenal
The recently released Threat Intelligence Report from iProov highlights the “skyrocketing increase in Native Virtual Camera and Face Swap attacks.”
“Native Virtual Camera attacks have become the primary threat vector, increasing by 2665 percent due partly to mainstream app store infiltration,” says the report. “Face Swap attacks surged 300 percent compared to 2023, with threat actors shifting focus to systems using liveness detection protocols.”
The company also offers a word of warning about vendors: “When a vendor claims to offer ‘complete deepfake protection,’ it is critical to inquire about which of the 115,000 known attack combinations they have tested,” it says. “We have documented 127 face swap tools, 91 virtual cameras, and 10 emulators – each of which creates distinct attack vectors.”
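Those three counts appear to be the source of the headline figure: 127 face swap tools × 91 virtual cameras × 10 emulators yields 115,570 possible tool combinations, which rounds to the 115,000 “known attack combinations” cited.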