Face morphing a growing threat to the trustworthiness of biometric identity credentials
Face morphing poses a significant threat to contemporary face biometrics, NIST Computer Scientist and International Face Performance Conference (IFPC) 2020 organizer Mei Ngan said during a presentation.
Morphing is not difficult to do, and commercial tools are available to help those who wish to perform it, Ngan points out. Some of these tools regularly produce artifacts in the images that give them away, but those artifacts can be reduced or eliminated with manual post-processing. Morphing faces with generative adversarial networks (GANs) tends not to produce artifacts in the same way as landmark-based processes.
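For context on the landmark-based approach Ngan refers to: such tools typically average corresponding facial landmarks from two photos, warp each face onto that shared geometry, and blend the pixels, and the ghosting and seams this leaves behind are the artifacts that manual post-processing tries to hide. The following is a minimal sketch of that general pipeline, not of any specific commercial tool, and it assumes landmark arrays have already been obtained from a separate detector:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def landmark_morph(img_a, img_b, pts_a, pts_b, alpha=0.5):
    """Blend two same-sized face images along averaged landmark geometry.

    img_a, img_b : float images in [0, 1] with identical shapes.
    pts_a, pts_b : (N, 2) arrays of corresponding landmarks in (x, y) order,
                   e.g. from a 68-point facial landmark detector (not shown).
    alpha        : contribution of subject A (0.5 gives an equal blend).
    """
    h, w = img_a.shape[:2]
    # Add the image corners so the piecewise-affine mesh covers the full
    # frame, not just the convex hull of the facial landmarks.
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], float)
    pts_a = np.vstack([pts_a, corners])
    pts_b = np.vstack([pts_b, corners])

    # Target geometry: weighted average of the two landmark sets.
    pts_avg = alpha * pts_a + (1.0 - alpha) * pts_b

    def warp_to_average(img, pts):
        # skimage's warp() expects the inverse mapping (output -> input
        # coordinates), so estimate a transform from the averaged landmarks
        # back to the image's own landmarks.
        tform = PiecewiseAffineTransform()
        tform.estimate(pts_avg, pts)
        return warp(img, tform)

    # Warp both faces onto the shared geometry, then cross-dissolve the pixels.
    return (alpha * warp_to_average(img_a, pts_a)
            + (1.0 - alpha) * warp_to_average(img_b, pts_b))
```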
“Morphing essentially poses a threat to entities that accept any sort of user-submitted photos for identity credentials,” Ngan explains, referencing the ‘magic passport’ concept elucidated by University of Bologna researchers in 2014, in which one passport can be used by two similar-looking accomplices.
Real incidents of morphing-based ID fraud have been observed in the wild, including the case of a German activist who was reported in September 2018 to have obtained a German passport with a photo of his face morphed with that of an Italian politician. At a conference last year, a poll of attendees found further evidence of the proliferation of morphed images.
In response, NIST launched its ongoing FRVT Morph evaluation in 2018.
The problem has continued to draw increasing attention, with Germany enacting legislation earlier this year to prevent uncontrolled or analog images, on which morphing can be difficult to detect, from being submitted as passport photos.
Ngan also differentiates deepfakes from morphing by noting that morphs are intended to defeat facial recognition systems, whereas deepfakes are crafted (sometimes with similar or the same technology) to fool people about an event.
NIST compared the performance of facial recognition with both regular and morphed photos, and found that, in general, the more accurate an algorithm is, the more likely it is to accept a morphed image. The algorithms most likely to reject a morph as a different person are also the most likely to reject true positive matches.
“I think it’s fair to say at the moment that the more accurate the algorithm, the more vulnerable it is to morphing attacks,” Ngan concludes.
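The vulnerability being measured here is usually framed as whether a single morphed photo verifies against bona fide photos of both contributing subjects. A minimal sketch of that kind of acceptance measurement, assuming hypothetical comparison scores from whichever face recognition comparator is under test:

```python
import numpy as np

def morph_acceptance_rate(scores_subject_a, scores_subject_b, threshold):
    """Fraction of morphs accepted as a match for *both* contributing subjects.

    scores_subject_a, scores_subject_b : arrays of comparison scores between
        each morphed image and a bona fide photo of contributor A / B,
        produced by the face recognition comparator under test.
    threshold : the comparator's verification threshold (e.g. the score
        corresponding to its operational false match rate on bona fide data).
    """
    a = np.asarray(scores_subject_a)
    b = np.asarray(scores_subject_b)
    accepted = (a >= threshold) & (b >= threshold)  # morph fools both checks
    return float(accepted.mean())
```

A morph only pays off if both accomplices can use the resulting credential, which is why both comparisons have to clear the threshold before the attack is counted as successful.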
NIST testing has evaluated algorithm performance against 13 datasets of morphed images, with algorithms that performed well against less sophisticated morphs tested against increasingly challenging sets of images. Workflows for unsupervised capture and for the authentication phase are tested based on existing presentation attack detection standards.
So far, no commercial algorithms claimed to be effective at detecting morphs have been submitted specifically to the Morph testing track.
At the same time, effectiveness at morph detection must not come at the expense of recognition accuracy.
“We don’t want the system owners turning the capability off,” Ngan explains. “So the question is: How low do we have to go for morph detection capabilities to be useful in operations?”
That means a false detection rate of less than 1 percent in operations, according to another attendee poll.
A couple of algorithms have shown some potential in recent NIST testing, Ngan says, but none is effective at detecting morphed images at common operational requirements. Some algorithms are not effective even against lower-quality morphs that human observers could detect.
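That operating point can be read directly off a detector’s score distributions. A minimal sketch, assuming hypothetical detector scores in which higher values mean “more likely a morph”:

```python
import numpy as np

def miss_rate_at_fdr(bona_fide_scores, morph_scores, target_fdr=0.01):
    """Morph miss rate when the threshold is set so that at most `target_fdr`
    of bona fide images are falsely flagged as morphs.

    Assumes higher scores mean "more likely to be a morph".
    """
    bona_fide = np.asarray(bona_fide_scores)
    # Threshold at the (1 - target_fdr) quantile of bona fide scores, so only
    # the top `target_fdr` fraction of genuine photos gets flagged.
    threshold = np.quantile(bona_fide, 1.0 - target_fdr)
    morphs = np.asarray(morph_scores)
    return float((morphs < threshold).mean())  # morphs the detector misses
```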
Higher image resolution has been hypothesized as a possible fix, but it improved results for only some algorithms, and in those cases with diminishing returns. Other mitigation ideas are being considered, such as running one-to-many searches to look for multiple candidates with suspiciously high similarity scores, but live enrollment is the only sure image-manipulation prevention method available today. Even that would not deal with morphs already in the system, Ngan observes.
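The one-to-many idea rests on a simple observation: a bona fide application photo should strongly resemble at most one enrolled identity, so several high-scoring candidates from a gallery search is a red flag. A minimal sketch of that check, assuming a hypothetical list of similarity scores against distinct enrolled identities:

```python
import numpy as np

def flag_possible_morph(candidate_scores, high_similarity_threshold, min_hits=2):
    """Flag a submitted photo whose one-to-many search returns several distinct
    identities with suspiciously high similarity, a pattern consistent with a
    morph of two (or more) people.

    candidate_scores : 1-D array of similarity scores, one per distinct
        enrolled identity returned by the gallery search.
    """
    scores = np.asarray(candidate_scores)
    hits = int((scores >= high_similarity_threshold).sum())
    return hits >= min_hits
```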
Raising awareness, even within the industry, is one of the next steps; in a poll that drew 180 audience responses, 10 percent said they had previously been unaware of the morphing problem.
When asked how many morphed images their country or organization has detected over the past five years, 61 percent said between 1 and 10. Fifteen percent of respondents report having seen more than 500 cases of morphing, however, and another 10 percent say they have seen between 51 and 500.
Accordingly, the next three IFPC 2020 presentations dealt with related topics, including human performance on morph detection and related research projects funded by the Department of Homeland Security’s Science and Technology Directorate (S&T).
Christoph Busch of the Norwegian University of Science and Technology (NTNU), Hochschule Darmstadt (HDA) and the EAB, who moderated the first half of the day’s proceedings, presented on the iMARS project, while Kiran Raja of NTNU and researchers from Clarkson University’s Center for Identification Technology Research (CITeR) also addressed the topic in the day’s second half, which was moderated by Ngan.
IFPC 2020 concludes Thursday, October 29.
Article Topics
biometric identification | biometric passport | biometrics | credentials | face morphing | face photo | facial recognition | fraud prevention | identity document | iMARS | International Face Performance Conference (IFPC) | morphing attack | NIST | spoofing