Biometric ID firms face the music on growing threat of generative AI, says iProov

Threat report shows a massive spike in face swap injection attacks in 2023
The latest threat intelligence report from iProov addresses the algorithmic elephant in the room, as implied by its subtitle: “The Impact of Generative AI on Remote Identity Verification.” Specifically, the report zooms in on tools and techniques that threat actors use to launch digital injection attacks that pose a risk to secure digital identity verification.

“In the last 24 months,” reads the report, “the threat landscape has undergone significant changes. Organizations considering incorporating facial biometrics into their remote identity platforms need to understand the benefits and drawbacks of the various technologies available and the pros and cons of different deployment methods.” Biometric solutions that looked secure two years ago may not instill the same confidence in a world of deepfakes, face swaps, voice cloning and whatever mass uptake of Apple Vision Pro might look like.

Notable statistics in the report include an observed increase in face swap injection attacks of a whopping 704 percent from the first to the second half of 2023. “Face swaps,” it says, “are now firmly established as the deepfake of choice among persistent threat actors.” The most common tools used for face swap attacks are SwapFace, DeepFaceLive and Swapstream. Most of these easily accessible options include a free tier for user experimentation – or, in the case of fraudsters, exploitation.
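To put the headline figure in perspective, a 704 percent increase means the second-half volume is roughly eight times the first-half volume. The sketch below uses a hypothetical H1 count purely for illustration; the report publishes growth rates, not raw attack counts.

```python
def pct_increase(before: float, after: float) -> float:
    """Period-over-period growth, expressed as a percentage."""
    return (after - before) / before * 100.0

# Illustrative numbers only: if H1 2023 saw 100 face swap injection
# attacks, a 704% rise implies roughly 804 in H2 2023.
h1 = 100.0
h2 = 804.0
print(round(pct_increase(h1, h2)))  # -> 704
```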

Synthetic media created using generative AI tools has become harder to detect and can be injected as a malicious payload into an audio or video feed and manipulated in real time. Security methods that would typically detect when virtual cameras are used to present synthetic faces can now be fooled with emulators, the use of which increased by 353 percent from H1 to H2 2023. Injection attacks on mobile platforms shot up by 255 percent in the same period. And the variety of tactics, the volume of bad actors and a “significant increase in the persistence of threat actors” mean the threat ecosystem is growing at pestilent speed: among the threat groups identified by iProov’s analysts, 47 were created in 2023.
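The emulator problem can be illustrated with a minimal, hypothetical sketch of the kind of naive check the report says is now being bypassed: flagging capture devices whose reported names match known virtual cameras. The device names and the `looks_like_virtual_camera` helper below are assumptions for illustration, not iProov’s method; the point is that an emulator can report a genuine hardware name and sail through.

```python
# Naive device-name heuristic for spotting virtual cameras.
# Device names here are illustrative examples, not an exhaustive list.
KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "manycam", "splitcam"}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Flag a capture device whose advertised name matches a known
    virtual-camera product. An emulator that spoofs a real hardware
    name (e.g. a built-in webcam) defeats this check entirely."""
    name = device_name.lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)

print(looks_like_virtual_camera("OBS Virtual Camera"))  # True: caught
print(looks_like_virtual_camera("FaceTime HD Camera"))  # False: a spoofed
# hardware name passes, which is why the report argues for AI-based
# detection of the media itself rather than device-name heuristics.
```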

This all sounds bad. But the outlook is not as bleak as it would seem: those who are equipped with the appropriate technological safeguards have less to worry about. “Organizations leveraging biometric verification technology are in a stronger position to detect and defend against these attacks than those relying solely on manual operation,” says the report. Its key takeaways confidently state that “human operator-led systems can no longer consistently and correctly detect synthetic media such as deepfakes,” and that “in order to detect synthetic media created using generative AI, verification technologies that leverage AI are essential.”

In addition, iProov calls on the biometrics industry to establish more rigorous certification requirements for vendors, and to factor in user experience and potential bias, along with steps to mitigate the latter.

“Threat actors are exploiting processes that rely on lower-cost technology as well as those that leverage human intervention,” says the report in its conclusion. “Current tools are outpacing defenses in both availability and sophistication. As a result, these new threat vectors are evading many current remote identity verification techniques faster than organizations can detect or adapt their security measures.”

“A proactive approach leveraging science is needed to identify, mitigate, and prevent potential threats before they become serious.”

The full report is available for download here.
