
NIST unmasks results of presentation attack detection and image defect software testing

Two publications from the U.S. National Institute of Standards and Technology (NIST) shed light on the current state of software designed to detect facial recognition spoof attacks and facial image defects.

The two publications are the first on the subject to appear since NIST divided its Face Recognition Vendor Test (FRVT) program into two tracks: Face Recognition Technology Evaluation (FRTE) and Face Analysis Technology Evaluation (FATE).

Face analysis is distinct from facial recognition: facial recognition aims to identify a person based on an image, while face analysis flags images that are problematic because of how the photo was captured.

“Can a given software algorithm tell you whether there’s something wrong with a face image?” asks Mei Ngan, a NIST computer scientist. “For example, are the person’s eyes closed? Is the image blurry? Is the image actually a mask that looks like another person’s face? These are the sort of defects that some developers claim their software can detect, and the FATE track is concerned with evaluating these claims.”

Ngan is the author of the study, “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms,” which evaluates the ability of face analysis algorithms to detect whether such issues constitute evidence of a biometric spoofing attack. The research team evaluated 82 software algorithms submitted voluntarily by 45 developers. The researchers challenged the software with two scenarios: impersonation (trying to look like someone else) and evasion (trying to avoid looking like oneself). Passive PAD software requires no specific action from the user, in contrast to active PAD.

The team evaluated the algorithms with nine types of biometric presentation attacks, with examples including a person wearing a sophisticated mask designed to mimic another person’s face, holding a photo of another person up to the camera, or wearing an N95 mask that hides some of the wearer’s face.

The results varied widely among PAD algorithms, but Ngan noted one caveat: “Only a small percentage of developers could realistically claim to detect certain presentation attacks using software,” she said. “Some developers’ algorithms could catch two or three types, but none caught them all.”

Participating vendors include ROC.ai, ID R&D, iProov, Aware, Neurotechnology, Cyberlink and Onfido.

One interesting finding was that the top-performing PAD algorithms worked better together.

“We asked if it would lower the error rate if you combined the results from different algorithms. It turns out that can be a good idea,” Ngan said. “When we chose four of the top performing algorithms on the impersonation test and fused their results, we found the group did better than any one of them alone.”
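The fusion Ngan describes can be illustrated with simple score-level fusion: averaging the spoof-likelihood scores of several detectors before thresholding. The following is a minimal sketch, assuming each hypothetical algorithm outputs a score in [0, 1]; the function names, scores, and threshold are illustrative and not drawn from the NIST evaluation.

```python
# Hypothetical sketch of score-level fusion for passive PAD algorithms.
# Assumes each algorithm returns a spoof-likelihood score in [0, 1].

def fuse_scores(scores):
    """Average the per-algorithm spoof scores (simple score-level fusion)."""
    return sum(scores) / len(scores)

def is_attack(scores, threshold=0.5):
    """Flag a presentation attack when the fused score exceeds the threshold."""
    return fuse_scores(scores) > threshold

# Three of four illustrative detectors are fairly confident this is a spoof;
# the fused decision outvotes the one dissenting score.
sample_scores = [0.9, 0.7, 0.8, 0.2]
print(round(fuse_scores(sample_scores), 2))  # 0.65
print(is_attack(sample_scores))              # True
```

A single weak detector can miss an attack type entirely; averaging lets stronger detectors compensate, which is one plausible reading of why the fused group outperformed any individual algorithm.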

The FATE track’s second study, “Face Analysis Technology Evaluation (FATE) Part 11: Face Image Quality Vector Assessment: Specific Image Defect Detection,” focused on detecting image defects, such as determining whether a passport photo might be rejected.

“If you walk into a drugstore and get a passport photo, you want to make sure your application won’t be rejected because there is an issue with the photo,” says study author Joyce Yang, a NIST mathematician. “Blurry photos are an obvious problem, but there can also be issues with backlighting or simply wearing glasses. We explored algorithms created to flag issues that make a photo noncompliant with passport requirements.”

The evaluation was the first in the FATE track, and the NIST team received seven algorithms from five developers. The study evaluated the algorithms on 20 quality measures, such as underexposure and background uniformity, all based on internationally accepted passport standards.
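To make the idea of a per-measure quality check concrete, here is a minimal sketch of one such defect check, underexposure, flagged from mean pixel brightness. The function names and the threshold are illustrative assumptions, not part of the NIST evaluation or the passport standards.

```python
# Hypothetical sketch of a single image-defect check of the kind evaluated:
# flagging underexposure from average brightness of a grayscale image.

def mean_brightness(pixels):
    """Mean intensity of a grayscale image given as rows of 0-255 values."""
    flat = [value for row in pixels for value in row]
    return sum(flat) / len(flat)

def is_underexposed(pixels, min_mean=60):
    """Flag the image as underexposed if mean brightness is below min_mean.
    The threshold 60 is an illustrative assumption."""
    return mean_brightness(pixels) < min_mean

dark_image = [[10, 20], [30, 40]]        # mean 25 -> flagged
bright_image = [[120, 130], [140, 150]]  # mean 135 -> passes
print(is_underexposed(dark_image))    # True
print(is_underexposed(bright_image))  # False
```

A real quality-vector assessor would run 20 such checks (background uniformity, blur, and so on) and report a score per measure rather than a single pass/fail.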

Yang said that all the algorithms showed mixed results. Each had its strengths, doing better on some of the 20 measures than others. These findings will guide the development of the ISO/IEC 29794-5 standard, specifying algorithm quality checks.
