DHS suggests face biometrics bias reduction method, quantifies demographic effects
Removing facial features associated with race and gender can make face biometric algorithms less likely to confuse people with others based on those demographics, according to new research from the U.S. Department of Homeland Security.
The paper, ‘Quantifying the Extent to Which Race and Gender Features Determine Identity in Commercial Face Recognition Algorithms,’ finds that race and gender sameness accounts for roughly 10 percent of the variation in face biometric similarity scores.
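To illustrate what a figure like that could mean in practice, the sketch below uses simulated scores, not the paper's data or its exact method, to estimate the share of similarity score variance explained by whether a pair of subjects shares race and gender.

```python
# Illustrative sketch (simulated data, not the paper's method): estimate how much
# of the variance in non-mated similarity scores is explained by whether two
# people share race and gender, via R^2 from a simple least-squares fit.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated non-mated comparisons: 1 = pair shares race and gender, 0 = otherwise.
same_demo = rng.integers(0, 2, size=n)

# Hypothetical similarity scores in which demographic sameness shifts the mean.
scores = 0.30 + 0.05 * same_demo + rng.normal(0, 0.08, size=n)

# Fit score ~ intercept + same_demo and compute the variance explained.
X = np.column_stack([np.ones(n), same_demo])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
residuals = scores - X @ beta
r2 = 1 - residuals.var() / scores.var()

print(f"Share of score variance explained by demographic sameness: {r2:.1%}")
```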
The composition of the biometric database an image is matched against could have a major influence on the size of accuracy differences between groups, particularly in police applications, DHS Maryland Test Facility Principal Data Scientist John Howard explains in a LinkedIn post. He suggests that the existing body of work on fairness in face biometrics, which is largely based on 1:1 matching, or verification, does not necessarily provide accurate insight into the problem in 1:N, or identification, scenarios.
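A rough sketch of that distinction, using random embeddings and a made-up cosine similarity threshold rather than any real algorithm, shows why gallery size and composition matter more in 1:N search: every additional gallery entry is another chance for a false match.

```python
# Minimal sketch of 1:1 vs 1:N matching (illustrative, random embeddings).
# Verification compares a probe to one claimed identity; identification compares
# it against every gallery entry, so false-match opportunities grow with gallery size.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

dim, threshold = 128, 0.25
probe = rng.normal(size=dim)

# 1:1 verification: a single comparison against the claimed identity's template.
claimed_template = rng.normal(size=dim)
verified = cosine(probe, claimed_template) >= threshold

# 1:N identification: one comparison per gallery entry; every score over the
# threshold from a different person is a candidate false match.
gallery = rng.normal(size=(1_000, dim))
scores = np.array([cosine(probe, g) for g in gallery])
candidate_hits = int((scores >= threshold).sum())

print(f"1:1 verified: {verified}, 1:N candidates above threshold: {candidate_hits}")
```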
The paper was authored by Howard, Yevgeniy Sirotin and Jerry Tipton of DHS’ Maryland Test Facility (MdTF), along with Arun Vemury of DHS’ Science and Technology Directorate.
They used data collected during the 2018 Biometric Technology Rally to test which features are used to establish identity. They found that face biometric algorithms, though not iris recognition algorithms, use features associated with race and gender: similarity scores produced by five leading facial recognition algorithms were higher for pairs of people of the same race or gender.
The researchers then propose a method for quantifying the use of features associated with race and gender, and analyze the possibility of removing those features from consideration. Removing the features reduced the algorithms’ performance, according to the paper, but not below useful levels.
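One way such removal could work in principle, sketched below with simulated embeddings and a simple linear projection (an assumption for illustration, not the procedure described in the paper), is to estimate a direction associated with a demographic attribute and project it out of every embedding before scoring.

```python
# Illustrative sketch of removing a demographically correlated component from
# face embeddings before comparison (hypothetical approach, not the paper's).
import numpy as np

rng = np.random.default_rng(2)
dim = 128

# Hypothetical embeddings for two demographic groups, offset along one direction.
group_a = rng.normal(size=(500, dim)) + 0.5
group_b = rng.normal(size=(500, dim)) - 0.5

# Direction associated with group membership: the normalized difference of means.
demo_direction = group_a.mean(axis=0) - group_b.mean(axis=0)
demo_direction /= np.linalg.norm(demo_direction)

def remove_direction(embeddings, direction):
    """Project a single direction out of each row of the embedding matrix."""
    return embeddings - np.outer(embeddings @ direction, direction)

cleaned = remove_direction(np.vstack([group_a, group_b]), demo_direction)

# After projection, the embeddings no longer vary along the demographic direction.
print(np.allclose(cleaned @ demo_direction, 0, atol=1e-10))
```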
Most commercial face biometric algorithms do not appear to limit feature extraction to features unassociated with race and gender. The researchers speculate this could be due to the use of deep convolutional neural networks, which have been known to take “short cuts” by relying on correlated but ultimately spurious data in object classification.
Selecting which features are used and balancing biometric reference galleries would represent a departure from the common current approach to bias reduction, the researchers point out, which focuses on delivering similar false match error rates for different demographic groups.
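That conventional check can be sketched as computing a false match rate separately for each demographic group at a shared threshold, as in the illustrative example below with simulated non-mated scores.

```python
# Minimal sketch (simulated scores) of the common fairness check the researchers
# contrast with their approach: per-group false match rate (FMR) at one threshold.
import numpy as np

rng = np.random.default_rng(3)
threshold = 0.5

# Hypothetical non-mated similarity scores grouped by demographic label.
nonmated_scores = {
    "group_1": rng.normal(0.42, 0.05, size=20_000),
    "group_2": rng.normal(0.45, 0.05, size=20_000),
}

for group, scores in nonmated_scores.items():
    fmr = (scores >= threshold).mean()
    print(f"{group}: FMR at threshold {threshold} = {fmr:.4f}")
```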
NIST is also digging into bias in AI in an attempt to help the industry reduce and ultimately eliminate it.