Face biometrics researchers find possible route to better accuracy across races, genders

A sampling of commercial facial recognition algorithms suggests that face biometrics systems currently depend too heavily on race and gender attributes, making it harder for them to differentiate between people who share those categories. Biometrics researchers from the U.S. Department of Homeland Security’s Maryland Test Facility (MdTF) and the Science and Technology Directorate have published a paper exploring the issue with five unnamed commercial facial recognition algorithms, using a commercial iris recognition algorithm as a control.
The testing was conducted by John Howard, Yevgeniy Sirotin and Jerry Tipton of MdTF and Arun Vemury of S&T.
Their paper, ‘Quantifying the Extent to Which Race and Gender Features Determine Identity in Commercial Face Recognition Algorithms,’ suggests that facial recognition algorithms searching large databases of demographically homogeneous images tend to produce disparate treatment based on race and gender. The research builds on the scientists’ previous work, which showed that images of people from the same demographic group tend to be scored as more similar to each other than images of people from different demographic groups, a phenomenon they call “broad homogeneity.”
Analysis of the principal components used by the biometrics algorithms shows that most of the variation between face vectors, which encode facial appearance, is unrelated to race or gender, though around 10 percent of it is. This means it should be possible for the algorithms to more consistently avoid unfair accuracy differentials.
“Further, separation between mated and non-mated score distributions reconstructed exclusively using PCs (principal components) that do not cluster individuals by race and gender was only modestly reduced, suggesting CFRAs (commercial facial recognition algorithms) can maintain acceptable performance even when ignoring face features associated with race and gender,” the report authors write.
Components with no significant clustering based on demographics account for 62 percent of total score variance for the facial recognition algorithms.
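For readers curious how such an analysis might look in practice, the sketch below illustrates the general idea under several assumptions, and is not the authors’ actual method or code: face templates are assumed to be available as plain vectors, race and gender labels are assumed to be known per subject, and a simple one-way ANOVA stands in for whatever demographic clustering test the researchers used. The names `embeddings`, `groups` and both functions are hypothetical.

```python
# A minimal sketch of the general approach described in the paper, not the authors' code.
# Assumptions: `embeddings` is an (N, D) array of face template vectors from some
# recognition algorithm, and `groups` holds a combined race/gender label per subject.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import f_oneway

def demographic_pcs(embeddings, groups, alpha=0.01):
    """Fit PCA and flag components along which demographic groups separate significantly."""
    pca = PCA().fit(embeddings)
    projected = pca.transform(embeddings)            # (N, n_components)
    labels = np.asarray(groups)
    clusters = [projected[labels == g] for g in np.unique(labels)]
    # Flag a component if group means differ significantly along it (one-way ANOVA).
    flagged = np.array([
        f_oneway(*[c[:, i] for c in clusters]).pvalue < alpha
        for i in range(projected.shape[1])
    ])
    return pca, flagged

def scores_without_demographic_pcs(embeddings, pca, flagged):
    """Reconstruct vectors using only non-flagged components and return cosine similarities."""
    projected = pca.transform(embeddings)
    projected[:, flagged] = 0.0                      # drop components tied to race/gender
    reduced = pca.inverse_transform(projected)
    normed = reduced / np.linalg.norm(reduced, axis=1, keepdims=True)
    return normed @ normed.T                         # pairwise similarity score matrix
```

Comparing mated and non-mated score distributions computed with and without the flagged components is one way to check, as the paper reports, how much matching performance is retained when demographic-related features are ignored.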
The researchers also cite recent research suggesting that demographic attributes can be removed from face images while the images remain effective for matching with facial recognition algorithms.
Human review can help, the researchers suggest, but ultimately, the technology needs to continue evolving.
“Developing demographically-blind CFRAs that explicitly ignore face features associated with race and gender will help maintain fairness as use of this technology grows,” the report says. “We believe that developing such algorithms and demonstrating fairness, including reduced demographic clustering, should be a focus for companies selling face recognition technology.”
The research will be discussed during the upcoming International Face Performance Conference (IFPC) 2020.
ID4Africa Executive Director Dr. Joseph Atick has called on Facebook to use its resources to help settle the question of whether facial recognition algorithms can be developed to meet high accuracy standards without significant differences in performance across subjects of different races and genders.