NIST unmasks results of presentation attack detection and image defect software testing

Two publications from the U.S. National Institute of Standards and Technology (NIST) shed light on the current state of software designed to detect facial recognition spoof attacks and facial image defects.

The two publications are the first on the subject to appear since NIST divided its Face Recognition Vendor Test (FRVT) program into two tracks: Face Recognition Technology Evaluation (FRTE) and Face Analysis Technology Evaluation (FATE).

Face analysis is distinct from facial recognition: facial recognition aims to identify a person based on an image, while face analysis is concerned with flagging images that are problematic because of issues with how the photo was captured.

“Can a given software algorithm tell you whether there’s something wrong with a face image?” asks Mei Ngan, a NIST computer scientist. “For example, are the person’s eyes closed? Is the image blurry? Is the image actually a mask that looks like another person’s face? These are the sort of defects that some developers claim their software can detect, and the FATE track is concerned with evaluating these claims.”

Ngan is the author of the study, “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms,” which evaluates the ability of face analysis algorithms to detect whether such issues constitute evidence of a biometric spoofing attack. The research team evaluated 82 software algorithms submitted voluntarily by 45 unique developers. The researchers challenged the software with two different scenarios: impersonation (trying to look like someone else) and evasion (trying to avoid looking like oneself). Passive PAD software does not require the user to perform a specific action, in contrast to active PAD, which does.

The team evaluated the algorithms with nine types of biometric presentation attacks, with examples including a person wearing a sophisticated mask designed to mimic another person’s face, holding a photo of another person up to the camera, or wearing an N95 mask that hides some of the wearer’s face.
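Evaluations of this kind are typically scored by how often an algorithm misses attacks and how often it wrongly flags genuine images (often reported as APCER and BPCER). The snippet below is a minimal sketch of that scoring, assuming each passive PAD algorithm returns a single spoof-likelihood score per image; the function name, scores and threshold are illustrative, not NIST's code or data.

```python
import numpy as np

def pad_error_rates(attack_scores, bona_fide_scores, threshold):
    """Compute the two common PAD error rates at a given decision threshold.

    Scores are assumed to be spoof-likelihood values, where higher means the
    algorithm is more confident the image is a presentation attack.
    """
    attack_scores = np.asarray(attack_scores, dtype=float)
    bona_fide_scores = np.asarray(bona_fide_scores, dtype=float)

    # APCER: fraction of attack images wrongly accepted as bona fide.
    apcer = float(np.mean(attack_scores < threshold))
    # BPCER: fraction of bona fide images wrongly flagged as attacks.
    bpcer = float(np.mean(bona_fide_scores >= threshold))
    return apcer, bpcer

# Hypothetical scores for one attack type (e.g. a printed photo held to the camera).
attack = [0.20, 0.90, 0.40, 0.95, 0.10]
bona_fide = [0.05, 0.10, 0.30, 0.02]
print(pad_error_rates(attack, bona_fide, threshold=0.5))  # -> (0.6, 0.0)
```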

The results varied widely among PAD algorithms, but Ngan noted one caveat: “Only a small percentage of developers could realistically claim to detect certain presentation attacks using software,” she said. “Some developers’ algorithms could catch two or three types, but none caught them all.”

Participating vendors include ROC.ai, ID R&D, iProov, Aware, Neurotechnology, Cyberlink and Onfido.

One interesting finding was that the top-performing PAD algorithms worked better together.

“We asked if it would lower the error rate if you combined the results from different algorithms. It turns out that can be a good idea,” Ngan said. “When we chose four of the top performing algorithms on the impersonation test and fused their results, we found the group did better than any one of them alone.”
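As a rough illustration of what fusing results can mean, the sketch below averages each image's scores across several algorithms and applies a single threshold. The simple averaging rule and the numbers are assumptions for illustration, not the fusion method or data NIST used.

```python
import numpy as np

def fuse_pad_scores(score_matrix):
    """Score-level fusion: average each image's scores across several PAD algorithms."""
    return np.mean(np.asarray(score_matrix, dtype=float), axis=0)

# Hypothetical spoof-likelihood scores from four algorithms on the same three images.
scores_per_algorithm = [
    [0.80, 0.20, 0.55],
    [0.70, 0.10, 0.60],
    [0.90, 0.30, 0.40],
    [0.75, 0.25, 0.65],
]
fused = fuse_pad_scores(scores_per_algorithm)
print(fused)         # per-image averaged scores
print(fused >= 0.5)  # flag as presentation attack above an assumed threshold
```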

The FATE track’s second study, “Face Analysis Technology Evaluation (FATE) Part 11: Face Image Quality Vector Assessment: Specific Image Defect Detection,” focused on image defect detection, such as determining whether a passport photo might be rejected.

“If you walk into a drugstore and get a passport photo, you want to make sure your application won’t be rejected because there is an issue with the photo,” says study author Joyce Yang, a NIST mathematician. “Blurry photos are an obvious problem, but there can also be issues with backlighting or simply wearing glasses. We explored algorithms created to flag issues that make a photo noncompliant with passport requirements.”

The evaluation was the first in the FATE track, and the NIST team received seven algorithms from five developers. The study evaluated the algorithms on 20 quality measures, such as underexposure and background uniformity, all based on internationally accepted passport standards.
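To make the idea of specific defect detection concrete, here is a minimal sketch of two such checks, underexposure and blur, written with OpenCV. The thresholds, file name and function are illustrative assumptions, not the evaluated algorithms or the standard's actual measures.

```python
import cv2
import numpy as np

def check_portrait_defects(image_path, underexposure_thresh=60.0, blur_thresh=100.0):
    """Flag two simple defects in a portrait photo: underexposure and blur.

    The thresholds here are illustrative guesses; real quality-assessment
    algorithms calibrate their checks against standards-based requirements.
    """
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Underexposure: mean brightness of the grayscale image is too low.
    underexposed = float(np.mean(gray)) < underexposure_thresh
    # Blur: low variance of the Laplacian means few sharp edges in the image.
    blurry = float(cv2.Laplacian(gray, cv2.CV_64F).var()) < blur_thresh

    return {"underexposed": underexposed, "blurry": blurry}

print(check_portrait_defects("passport_photo.jpg"))  # hypothetical file path
```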

Yang said that all the algorithms showed mixed results. Each had its strengths, doing better on some of the 20 measures than on others. These findings will guide the development of the ISO/IEC 29794-5 standard, which will specify the quality checks such algorithms should perform.
