Biometric ID firms face the music on growing threat of generative AI, says iProov
The latest threat intelligence report from iProov addresses the algorithmic elephant in the room, as implied by its subtitle: “The Impact of Generative AI on Remote Identity Verification.” Specifically, the report zooms in on tools and techniques that threat actors use to launch digital injection attacks that pose a risk to secure digital identity verification.
“In the last 24 months,” reads the report, “the threat landscape has undergone significant changes. Organizations considering incorporating facial biometrics into their remote identity platforms need to understand the benefits and drawbacks of the various technologies available and the pros and cons of different deployment methods.” Biometric solutions that looked secure two years ago may not instill the same confidence in a world of deepfakes, face swaps, voice cloning and whatever mass uptake of Apple Vision Pro might look like.
Notable statistics in the report include an observed increase in face swap injection attacks of a whopping 704 percent from the first half of 2023 to the second. “Face swaps,” it says, “are now firmly established as the deepfake of choice among persistent threat actors.” The most common tools used in face swap attacks are SwapFace, DeepFaceLive and Swapstream, most of them easily accessible options with a free tier that invites experimentation from users and exploitation from fraudsters.
Synthetic media created with generative AI tools has become harder to detect, and it can be injected as a malicious payload into an audio or video feed and manipulated in real time. Security methods that would typically detect virtual cameras being used to present synthetic faces can now be fooled with emulators, the use of which increased by 353 percent from H1 to H2 2023. Injection attacks on mobile platforms shot up by 255 percent over the same period. And the variety of tactics, the volume of bad actors and a “significant increase in the persistence of threat actors” mean the threat ecosystem is growing with pestilent speed: of the threat groups identified by iProov’s analysts, 47 were created in 2023.
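The emulator point is worth making concrete. A common first line of defense against injection is checking whether the capture device even looks like a physical camera. The sketch below (a naive heuristic, assuming Linux and V4L2; the blocklist of virtual-camera names is illustrative and not taken from the report) shows how shallow such a check is:

```python
import glob
from pathlib import Path

# Illustrative blocklist of driver/device names commonly associated with
# virtual cameras. Production systems rely on far richer signals.
VIRTUAL_CAMERA_MARKERS = {"v4l2loopback", "obs virtual camera", "dummy video device"}

def looks_virtual(device_name: str) -> bool:
    """Flag a capture device whose advertised name matches a known virtual camera."""
    name = device_name.strip().lower()
    return any(marker in name for marker in VIRTUAL_CAMERA_MARKERS)

def scan_capture_devices() -> list[tuple[str, bool]]:
    """List V4L2 capture devices with a naive 'is it virtual?' verdict for each."""
    results = []
    for sysfs_entry in glob.glob("/sys/class/video4linux/*/name"):
        device_name = Path(sysfs_entry).read_text().strip()
        results.append((device_name, looks_virtual(device_name)))
    return results

if __name__ == "__main__":
    for name, flagged in scan_capture_devices():
        print(f"{name}: {'suspected virtual camera' if flagged else 'ok'}")
```

An emulated device stack that advertises a plausible hardware name sails straight through a check like this, which is exactly the gap the report says attackers are exploiting.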
This all sounds bad. But the outlook is not as bleak as it would seem: those who are equipped with the appropriate technological safeguards have less to worry about. “Organizations leveraging biometric verification technology are in a stronger position to detect and defend against these attacks than those relying solely on manual operation,” says the report. Its key takeaways confidently state that “human operator-led systems can no longer consistently and correctly detect synthetic media such as deepfakes,” and that “in order to detect synthetic media created using generative AI, verification technologies that leverage AI are essential.”
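The report does not specify architectures, but the shape of “AI detecting AI” is familiar: score incoming frames with a trained classifier rather than relying on human review. A minimal sketch follows, assuming a frame-level binary deepfake detector; the network, threshold and averaging here are placeholders for illustration, not a description of iProov’s system:

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Placeholder standing in for a trained deepfake detector.

    Any frame-level binary classifier fits this interface.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit: positive leans "synthetic"

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames).flatten(1)).squeeze(-1)

@torch.no_grad()
def clip_is_synthetic(model: nn.Module, frames: torch.Tensor,
                      threshold: float = 0.5) -> bool:
    """Aggregate per-frame synthetic-media scores over a clip.

    frames: (N, 3, H, W) batch of face crops from the incoming feed.
    Averaging frame probabilities is the simplest aggregation; real
    verification stacks fuse many signals, including active liveness.
    """
    model.eval()
    probs = torch.sigmoid(model(frames))
    return probs.mean().item() > threshold

if __name__ == "__main__":
    model = FrameClassifier()            # would load trained weights in practice
    clip = torch.rand(8, 3, 224, 224)    # stand-in for 8 face crops from a feed
    print("synthetic?", clip_is_synthetic(model, clip))
```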
In addition, iProov calls on the biometrics industry to establish more rigorous certification requirements for vendors, and to account for user experience and potential bias, along with steps to mitigate that bias.
“Threat actors are exploiting processes that rely on lower-cost technology as well as those that leverage human intervention,” says the report in its conclusion. “Current tools are outpacing defenses in both availability and sophistication. As a result, these new threat vectors are evading many current remote identity verification techniques faster than organizations can detect or adapt their security measures.”
“A proactive approach leveraging science is needed to identify, mitigate, and prevent potential threats before they become serious.”
The full report is available for download from iProov.