BioID shares encouraging research on deepfakes and biometric liveness detection with EAB
Deepfake images and videos pose a significant threat to biometric systems used for remote identity verification, but existing liveness technologies can detect them; a separate attack vector, injection attacks that bypass the physical camera, is the vulnerability businesses need to be aware of.
‘Why Deepfakes aren’t the Real Challenge for Remote Biometrics’ was presented by Ann-Kathrin Freiberg of BioID in the latest lunch talk hosted by the European Association for Biometrics (EAB).
More than 250 attendees from more than 40 countries pre-registered for the presentation, many of whom were highly engaged in discussion throughout.
The origin of the term, which refers to the use of deep learning to manipulate or fabricate an image, video or audio file, was reviewed, and Freiberg shared several examples of deepfakes, including a morph fake created by a BioID employee with a free app and a single image found on the internet.
Some basic tips for spotting deepfake videos were shared, such as observing the transitions between different areas of the face and head, and the frequency or absence of blinking. Asymmetries, such as mismatched eye colors or earrings, can also reveal a deepfake.
A BioID researcher reviewed the publicly available apps for creating deepfakes, and found four that are basically effective. Their capabilities, Freiberg warns, though far from perfect, are easily adequate to fool many people who are unaware of the concept of deepfakes and not suspicious of the media content they see.
A deep dive into deepfakes
Freiberg provided a breakdown of how deepfakes are made, their history, and real-world implications.
Variational auto-encoders are trained on the facial features of a person using source materials, she explains, with more and higher-quality input resulting in better performance, just as in training other biometric systems and algorithms. Swapping the encoder and decoder trained on two different people can transfer the features of one onto the other’s face.
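To make the architecture concrete, here is a minimal sketch of the shared-encoder, per-identity-decoder arrangement Freiberg describes, written as a plain (non-variational) autoencoder for brevity. The class, layer sizes and identity labels are illustrative assumptions, not BioID’s or any production tool’s implementation.

```python
# A minimal sketch (not BioID's implementation) of the autoencoder face-swap
# idea described above: one shared encoder, one decoder per identity.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3, latent_dim=256):
        super().__init__()
        # Shared encoder: compresses any aligned face crop into a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(img_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # One decoder per identity: reconstructs a face in that person's likeness.
        self.decoder_a = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid(),
        )
        self.decoder_b = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid(),
        )

    def forward(self, x, identity):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

# Training reconstructs each person's own faces, so the shared encoder learns
# pose and expression while each decoder learns one identity's appearance.
# The swap happens at inference: encode person A's frame, decode with person
# B's decoder, producing B's face with A's pose and expression.
model = FaceAutoencoder()
frame_of_a = torch.rand(1, 64 * 64 * 3)   # a flattened, aligned face crop of A
fake_b = model(frame_of_a, identity="b")  # untrained here; illustrates the swap
```

The quality point Freiberg makes carries over directly: the more (and cleaner) source frames of each identity used in training, the more convincing the swapped output.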
The first research into deepfakes, described at the time as “synthetic animation of faces,” was published in 1997, but the field took off with the emergence of generative adversarial networks (GANs) in 2014, which allow for the creation of convincing deepfakes. High-end hardware and professional efforts were producing fairly convincing deepfakes by 2016, and the term “deepfake” emerged from a Reddit forum discussing pornographic videos. The topic entered public discussion of social problems like fake news in 2018, and apps to create deepfakes from a single image followed in 2021.
In practice, 96 percent of deepfake content is pornography, Freiberg revealed. The dangers so far have therefore largely been seen at the personal level, though there is also a risk of politicized misinformation, as people tend to be less critical in accepting the veracity of information that aligns with beliefs they already hold.
There are potential benefits of deepfakes as well, however, for fun and entertainment, but also for more practical purposes. Freiberg cites a deepfake video conveying health information from celebrity Brad Pitt in languages the actor does not actually speak as an example of this kind of deepfake application with potential social benefit.
Biometric identity verification implications
The rise of deepfakes in the context of increasing adoption of digital identity verification, including with video agent chats and selfie biometrics, makes biometric liveness detection even more important.
Freiberg asks whether biometric systems can detect deepfakes, and provides the troubling answer: facial recognition alone cannot always do so.
Reviewing the ISO/IEC 30107-3 presentation attack detection (PAD) standard, Freiberg notes that deepfakes are not mentioned among the attacks it covers.
Techniques used by PAD systems, such as 3D motion and texture analysis, can detect deepfakes, however. This is because many are essentially video replay attacks, which are covered by the PAD standard.
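As one illustration of what texture analysis can mean in practice, the sketch below uses local binary pattern (LBP) histograms of a face crop fed to a binary classifier, a common approach in the PAD literature. It is an assumed, simplified pipeline trained on random data for demonstration, not the method Freiberg or BioID describe.

```python
# A minimal sketch of texture-based PAD in the spirit described above: local
# binary pattern (LBP) histograms of a face crop fed to a binary classifier.
# This is an illustrative assumption, not BioID's actual pipeline.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    """Describe the micro-texture of a grayscale face crop as a normalized LBP histogram."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Replay attacks (a screen re-displaying a deepfake) leave texture artifacts
# such as moire patterns and flattened micro-texture that shift the histogram.
rng = np.random.default_rng(0)
bona_fide = [lbp_histogram(rng.integers(0, 256, (64, 64)).astype(np.uint8)) for _ in range(20)]
attacks   = [lbp_histogram(rng.integers(0, 256, (64, 64)).astype(np.uint8)) for _ in range(20)]

X = np.vstack(bona_fide + attacks)
y = np.array([0] * 20 + [1] * 20)      # 0 = live capture, 1 = replayed media
clf = SVC(probability=True).fit(X, y)  # random data here; real face crops in practice
```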
“Everything is fine, right? Presentation’s over and we can go home? No, it’s actually not like that. There’s more than one attack vector when it comes to remote identity verification,” Freiberg warns. The reason is that other attack vectors exist, namely application-level attacks, in which virtual cameras inject videos directly into the application rather than presenting them to the physical camera.
Though challenge-response technology can help with pre-recorded deepfake injections, Freiberg says deepfakes manipulated live remain highly difficult to detect. Fortunately, they are also difficult to pull off.
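The anti-replay property of challenge-response can be illustrated with a small sketch: the verifier issues a random, short-lived instruction that a pre-recorded injection could not have anticipated. The prompt list, expiry window and function names below are assumptions for illustration only.

```python
# A minimal sketch of the challenge-response idea: the verifier issues a fresh,
# random instruction with a short expiry, so a pre-recorded deepfake cannot
# have anticipated it. Prompts and timings are illustrative assumptions.
import secrets
import time

ACTIONS = ["turn head left", "turn head right", "look up", "smile"]

def issue_challenge(ttl_seconds=10):
    """Create a one-time challenge the user must satisfy on camera."""
    return {
        "nonce": secrets.token_hex(16),
        "action": secrets.choice(ACTIONS),
        "expires_at": time.time() + ttl_seconds,
    }

def verify_response(challenge, response_nonce, observed_action):
    """Accept only if the nonce matches, the observed action matches, and time has not run out."""
    fresh = time.time() <= challenge["expires_at"]
    return (fresh
            and secrets.compare_digest(challenge["nonce"], response_nonce)
            and observed_action == challenge["action"])

challenge = issue_challenge()
# The liveness engine would report which action it actually observed in the video.
print(verify_response(challenge, challenge["nonce"], challenge["action"]))  # True
```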
Blinking detection, image forensics and occlusion detection can be used to identify deepfakes, and AI algorithms can analyze artefacts to reveal digital manipulation.
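Blinking detection, for instance, is often implemented with the eye aspect ratio (EAR) over facial landmarks. The sketch below assumes landmark coordinates are supplied by a separate face-landmark model (not shown) and uses an illustrative closed-eye threshold.

```python
# A minimal sketch of blink detection via the eye aspect ratio (EAR), one common
# way to implement the blinking cue mentioned above. Landmark coordinates are
# assumed to come from a separate face-landmark model (not shown here).
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, ordered corner-to-corner."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with the eye sufficiently closed."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

# Many early deepfakes blinked too rarely (or not at all); an implausibly low
# blink count over a clip is therefore one signal of possible manipulation.
ears = [0.3] * 30 + [0.1] * 3 + [0.3] * 30   # synthetic EAR trace with one blink
print(count_blinks(ears))                     # 1
```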
“If a machine makes something, then a machine would normally be able to detect the traces,” Freiberg explains. “That’s the good news.”
BioID is part of a consortium of German researchers from industry, academia and government developing methods to detect deepfakes.
Virtual camera attacks must be addressed separately, however, and present a legitimate threat to biometric identity verification systems. Preventing them can be achieved through native apps, blacklisting of virtual camera drivers, challenge-response techniques, and randomized image capturing.
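Blacklisting virtual camera drivers can be as simple as checking enumerated capture-device names against known virtual camera software, as in the sketch below. Device enumeration is platform-specific and assumed to happen elsewhere, and the blocklist entries are illustrative rather than exhaustive.

```python
# A minimal sketch of the virtual-camera blacklisting mitigation listed above:
# flag capture devices whose names match well-known virtual camera software.
# Enumerating device names is platform-specific and assumed to happen elsewhere;
# the blocklist entries below are illustrative, not exhaustive.
KNOWN_VIRTUAL_CAMERAS = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "xsplit vcam",
)

def suspicious_devices(device_names):
    """Return capture devices whose names suggest a virtual (software) camera."""
    return [
        name for name in device_names
        if any(marker in name.lower() for marker in KNOWN_VIRTUAL_CAMERAS)
    ]

devices = ["Integrated Webcam", "OBS Virtual Camera"]
print(suspicious_devices(devices))  # ['OBS Virtual Camera']
```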
Combine native apps with high-end PAD software, Freiberg says, to give systems the best protection currently available.
This post was updated at 11:15am Eastern on February 28, 2022 to clarify that the risk of injection attacks is distinct from deepfakes.
Article Topics
BioID | biometric liveness detection | biometrics research | deepfake detection | deepfakes | EAB | European Association for Biometrics | face biometrics | identity verification | injection attacks | ISO/IEC 30107-3 | presentation attack detection | remote verification