Morphing attack detection for face biometric spoofs needs more generalization, datasets
Deepfakes may get more press, but face morphing could be the more pressing security concern for border control systems using biometrics to verify the identities of travelers, attendees of the European Association for Biometrics’ latest workshop heard.
The EAB Workshop on Digital Face Manipulation & Detection also focused on matching audio to video in face generation, the ethics and social implications of digital face manipulation, and deepfakes. Other presentation topics included detection of synthetic faces, adversarial attacks, and photorealistic face editing.
A trio of presentations specifically focused on the threat of face morphing in identity documents, and what can be done to detect it.
A real-world problem
Annalisa Franco of the University of Bologna and iMARS described morphing as a combination of warping and texture blending applied to facial landmarks, or as the product of generative adversarial networks (GANs).
She described how AI is used to generate double-identity face images, which can be enrolled on a passport chip to allow two people to pass biometric checks against the same ID document. This has been identified as one of the most serious threats to biometric security systems, such as those used for border control.
Real-life examples have been observed since 2018. Slovenian police reported in 2021 that they had seen more than 40 cases of morphing, offered as part of a professional service issuing Slovenian passports to Albanians so they could travel to Canada.
Franco shared examples of the two approaches to face morphing, and discussed their effectiveness and limitations.
Despite these limitations, facial recognition systems such as ABC gates remain vulnerable to morphing attacks, as indicated by two previously proposed evaluation metrics. One considers an attack successful if the morphed image can be matched with any probe image, while the other, the ‘fully mated morph presentation match rate’ (FMMPMR), requires success against all probe images of both subjects.
Franco proposed a new metric, known as Morphing Attack Potential (MAP), which takes the perspective of the criminal and considers variable numbers of probe images and multiple facial recognition systems.
A test with this metric showed that more than a quarter of morphed images fooled all four facial recognition systems with all probe images, while 85 percent of morphed images fooled at least one system on a single probe image.
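The MAP idea can be illustrated with a small sketch. This is not the official definition from Franco's paper, only a plausible reading of it: assuming verification outcomes are available as a boolean array of morphs x recognition systems x probe images, each cell of the matrix reports the fraction of morphs accepted by at least a given number of systems with at least a given number of probes.

```python
import numpy as np

def morphing_attack_potential(results):
    """Sketch of a MAP-style matrix from verification outcomes.

    results: bool array of shape (n_morphs, n_systems, n_probes);
    results[m, s, p] is True if morph m passed verification against
    probe p on face recognition system s.

    Returns a matrix where entry [r-1, c-1] is the fraction of morphs
    accepted with at least r probe images by at least c systems.
    """
    n_morphs, n_systems, n_probes = results.shape
    # For each morph and system, count how many probes were accepted.
    probes_passed = results.sum(axis=2)  # shape (n_morphs, n_systems)
    map_matrix = np.zeros((n_probes, n_systems))
    for r in range(1, n_probes + 1):
        # For each morph: number of systems fooled with >= r probes.
        systems_fooled = (probes_passed >= r).sum(axis=1)
        for c in range(1, n_systems + 1):
            map_matrix[r - 1, c - 1] = (systems_fooled >= c).mean()
    return map_matrix
```

Under this reading, the figures in the article correspond to the two corners of the matrix: the bottom-right entry (all probes, all four systems) would be above 0.25, and the top-left entry (one probe, one system) around 0.85.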
Morphing detection methods
Christoph Busch of NTNU and HDA reviewed morphing attack detection, noting the somewhat counterintuitive finding that more accurate facial recognition systems tend to be more vulnerable to morphing attacks than less accurate ones. The greater tolerance to image variation that makes some systems more accurate, Busch explains, also makes them vulnerable to this particular kind of attack.
Morphing attack detection through texture analysis with local binary pattern (LBP) image descriptors shows promise, but generalization with LBPs is difficult, according to Busch. Photo-response non-uniformity (PRNU) can reveal that parts of an image were captured by two different cameras, exposing the morph in histograms.
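The LBP descriptor Busch mentions is a standard texture feature; a minimal hand-rolled version (real systems typically use a library implementation and a trained classifier, neither of which is shown here) encodes each pixel by comparing it with its eight neighbours and histograms the resulting codes:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram.

    img: 2-D uint8 grayscale array. Returns a normalized 256-bin
    histogram usable as a texture feature vector for a classifier.
    """
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        h, w = img.shape
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set one bit per neighbour that is >= the centre pixel.
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Morph detectors of this family feed such histograms (computed per image block) to a classifier such as an SVM; the generalization problem Busch describes is that the texture statistics learned from one morphing tool or dataset do not transfer well to another.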
Fortunately, catching morphing attacks does not typically involve analysis of a single image, but comparison of two images. This allows differential morphing attack detection from differences in feature angles and distances.
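One common way to realize this differential comparison (a generic sketch, not a specific method presented at the workshop) is to score the angle between deep-feature embeddings of the document photo and a trusted live capture, and flag large angles as possible morphs:

```python
import numpy as np

def differential_score(doc_embedding, live_embedding):
    """Angle in degrees between the feature embedding of a document
    photo and that of a trusted live capture of the traveler.

    An unusually large angle (for the face recognition system in use)
    suggests the document photo may be a morph of two identities.
    """
    cos = np.dot(doc_embedding, live_embedding) / (
        np.linalg.norm(doc_embedding) * np.linalg.norm(live_embedding))
    # Clip guards against floating-point drift outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

The embeddings here stand in for whatever feature vectors (landmark geometry or deep features) a given differential detector extracts; the decision threshold would be tuned on bona fide and morphed training pairs.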
Another differential image method, suggested in 2018 by Franco and two fellow researchers, also sets out the possibility of “demorphing”: inverting the morph process and then checking the similarity score of the result.
The attack detection metrics set out in ISO/IEC 30107-3 can be used for morphing attacks, Busch says, but performing the evaluations is highly complex.
Both the EU and U.S. NIST are working on morphing attack detection evaluation.
Practical evaluations of detection methods were further described by Luuk Spreeuwers of the University of Twente, who pointed out the various kinds of traces that morphing leaves on images.
The equal error rate of morph detection systems reported in academic papers tends to be around 2 percent, but because those results are based on a single morphing method and a single database, Spreeuwers cautions that they cannot be generalized. Cross-database tests, he says, are needed for insight into their practical effectiveness.
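The equal error rate Spreeuwers refers to is the operating point where the rate of accepted morphs equals the rate of rejected bona fide images; a simple threshold sweep (a sketch, with score conventions assumed) makes the definition concrete:

```python
import numpy as np

def equal_error_rate(bona_fide_scores, morph_scores):
    """EER of a morph detector, assuming higher scores mean
    'more likely bona fide'.

    Sweeps thresholds and returns the point where the false accept
    rate (morphs scored as bona fide) is closest to the false reject
    rate (bona fide images scored as morphs).
    """
    thresholds = np.sort(np.concatenate([bona_fide_scores, morph_scores]))
    best_far, best_frr = 1.0, 0.0
    for t in thresholds:
        far = np.mean(morph_scores >= t)      # morphs accepted
        frr = np.mean(bona_fide_scores < t)   # bona fide rejected
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2
```

Run on scores from a detector trained and tested on the same database, this yields the flattering ~2 percent figures; run cross-database, the same computation exposes the much weaker generalization Spreeuwers describes.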
Databases with images morphed using different methods are therefore needed. Spreeuwers’ team combined four existing datasets to build databases for algorithm training and testing, using a texture-based approach, reported to work well, with the extracted features classified by an SVM. A system trained and tested on the combined datasets, however, delivered an EER of 35 percent.
The researchers found that morphing detection can be thwarted by injecting small amounts of Gaussian noise, or by downscaling and then upscaling images. Both methods disturb the traces left behind by the morph generation process.
Subjects for morph generation should also be chosen based on similarity, since this is what criminals do, Spreeuwers points out.
Like Busch, Spreeuwers concludes by directing his audience to the State of the Art Morphing Attack Detection (SOTAMD) project run by the EU to set up a database for testing morphing attack detection systems.