Academic deepfake research paper suggests liveness detection vulnerable
New research into the methods, effectiveness and limits of biometric deepfake detection adds to the growing body of work on the topic. The findings do not all neatly align, except on the premise that deepfakes are a potential threat to facial recognition and liveness detection systems.
A test of biometric liveness APIs indicates that they are not up to the task of detecting emerging and developing deepfakes, Unite.AI reports.
‘Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era’ evaluates the effectiveness of ‘Facial Liveness Verification’ or ‘FLV’ biometrics services delivered through APIs, and finds that many are configured to detect legacy deepfake attack techniques, or are too dependent on a particular architecture. The study acknowledges significant differences in measures deployed by different vendors.
The ‘LiveBugger’ deepfake attack framework was used to test liveness systems, and found that some are better able to detect high-quality synthesized videos than lower-quality ones (which are presumably more susceptible to human detection). The researchers incorporated six deepfake frameworks into the development of LiveBugger, covering four different attack vectors. The framework was applied to liveness systems using single images, video clips, prompted actions and prompted speech.
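The paper’s testbed is not reproduced here, but as a rough illustration of the setup described above, the sketch below shows how a harness might submit deepfaked media to a liveness API under each of the four verification modes. The client object, its verify() call and the response fields are placeholder assumptions for illustration, not LiveBugger’s actual interface.

```python
# Purely illustrative sketch, not LiveBugger itself: a minimal harness that submits
# synthesized media to a liveness API in each of the four verification modes the
# study targets. The vendor client, verify() call and response fields are assumptions.
from dataclasses import dataclass

MODES = ["single_image", "video_clip", "prompted_action", "prompted_speech"]

@dataclass
class Attempt:
    framework: str         # which deepfake framework produced the media
    mode: str              # which verification mode was attacked
    passed_liveness: bool  # True if the fake was accepted as a live person

def evaluate(vendor_client, deepfake_samples):
    """Submit each synthesized sample under every verification mode and log the outcome."""
    results = []
    for sample in deepfake_samples:
        for mode in MODES:
            # Hypothetical vendor API: assumed to return a dict with an 'is_live' decision.
            response = vendor_client.verify(media=sample["media"], mode=mode)
            results.append(Attempt(sample["framework"], mode, bool(response.get("is_live"))))
    return results
```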
The analysis proceeds to show how bias in liveness detection technologies can be used to more effectively select targets to victimize. The researchers also explore other methods of improving attack effectiveness.
They present an overview of vulnerabilities to deepfakes found in liveness detection technologies, and conclude that biometric liveness detection systems should abandon the single-image approach in the future. They also recommend the use of deepfake detection for video clips, analysis of lip movements in prompted speech processes, and coherence detection in action-based liveness systems.
The vendors involved have confirmed the research findings, according to the academics conducting the study.
The research contrasts with that presented recently by BioID’s Ann-Kathrin Freiberg in an EAB lunch talk, which suggests that existing biometric liveness detection technologies like 3D motion and texture analysis are usually successful at identifying deepfakes as video replay attacks.
Methods for masks
A paper on ‘Deepfake Detection for Facial Images with Facemasks’, published by researchers from Sungkyunkwan University in South Korea, examines whether existing detection methods remain effective when faces are partially covered.
The South Korean research team says that deepfake detection methods have so far shown strong performance, but that testing has not evaluated whether this effectiveness extends to systems used with masked faces. They address this gap by testing existing tools and developing two of their own to tackle the additional challenge of detecting deepfaked faces occluded by masks.
Four deepfake detection models were tested. In baseline testing, the best-performing model detected unmasked deepfakes from five datasets between two-thirds and nearly 97 percent of the time, but when masks were added, the accuracy rate dipped by close to ten percent in each case, and by more than twenty percent for two of the datasets.
The two techniques developed by the researchers are based on altering existing training datasets so that models perform better with occlusions, as sketched below. The ‘Face-crop’ method trains on images cropped just below the eyes, and improved performance more than the ‘Face-patch’ method, which placed digital rectangles over subjects’ eyes and noses.
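As a loose illustration only, and not the authors’ code, the sketch below shows how such dataset alterations might be produced with OpenCV; the crop row, patch coordinates and file names are placeholder assumptions.

```python
# Hypothetical sketch of the two dataset-alteration ideas described above; the exact
# crop line and patch placement used by the researchers are not reproduced here.
import cv2
import numpy as np

def face_crop(image: np.ndarray, cut_row: int) -> np.ndarray:
    """'Face-crop'-style alteration: keep only the portion of the face image above cut_row."""
    return image[:cut_row, :].copy()

def face_patch(image: np.ndarray, box: tuple) -> np.ndarray:
    """'Face-patch'-style alteration: cover a rectangular facial region with a flat grey patch."""
    x, y, w, h = box
    patched = image.copy()
    patched[y:y + h, x:x + w] = 128  # flat grey rectangle simulating an occlusion
    return patched

if __name__ == "__main__":
    img = cv2.imread("face.jpg")  # placeholder input image
    cv2.imwrite("face_crop.jpg", face_crop(img, cut_row=120))
    cv2.imwrite("face_patch.jpg", face_patch(img, box=(60, 40, 140, 90)))
```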
Paper awarded
A research paper on the use of deepfake detection to prevent morphing presentation attacks against smart city facial recognition systems won the Best Paper Award at the recent OITS/IEEE International Conference on Information Technology, according to an announcement by the University of North Texas College of Engineering.
‘Detection of Deep-Morphed Deepfake Images to Make Robust Automatic Facial Recognition Systems’ was written by doctoral student Alakananda Mitra, Computer Science and Engineering Professor Saraju P. Mohanty and Electrical Engineering Professor Elias Kougianos, all of UNT.
The method described in the paper showed deepfake detection accuracy of between 94.83 and 100 percent, as long as a low-quality image training dataset was not used to evaluate high-quality images.
Voice deepfakes for sale
For those who would like a deepfake avatar to speak on their behalf, Speech Morphing will prompt them to record hundreds of specific phrases to capture a range of sounds and emotions.
Speech Morphing Founder and CEO Fathy Yassa tells MindMatters.ai that alternatively the company can build a voice deepfake by extracting 10 to 15 minutes of audio recordings from the net.
Article Topics
biometric liveness detection | biometrics | biometrics research | deepfake detection | deepfakes | face biometrics | face morphing | presentation attack detection