Researchers show modest progress in face biometrics bias reduction, find a possible cause
Facial recognition developers appear to be making progress in reducing “bias” or demographic disparities in biometric accuracy, based on the results of a challenge presented at the European Conference on Computer Vision (ECCV) 2020, but the problem is far from solved.
The 2020 ChaLearn Looking at People Fair Face Recognition and Analysis Challenge was sponsored by AnyVision, and shows incremental improvements in performance across different groups through a variety of bias-mitigation techniques.
There were 151 participants in the challenge, 36 of whom submitted to the challenge’s final phase, with 10 exceeding 0.999 AUC-ROC (area under the receiver operating characteristic curve) along with low scores on the bias metrics considered.
Contestants used strategies including face preprocessing, homogenization of data distributions, bias-aware loss functions, ensemble models and others to try to reduce bias in their biometric matching results.
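None of the teams’ implementations appear in the article itself, but a group-reweighted loss is one simple form a “bias-aware loss function” can take. The sketch below assumes hypothetical demographic group labels and per-group weights; it illustrates the general idea rather than any contestant’s actual method.

```python
import torch
import torch.nn.functional as F

def group_weighted_bce(scores, labels, group_ids, group_weights):
    """Binary cross-entropy on match scores, reweighted per demographic group.

    scores:        raw similarity logits for each image pair, shape (N,)
    labels:        1 for matching pairs, 0 for non-matching pairs, shape (N,)
    group_ids:     integer demographic group of each pair, shape (N,)
    group_weights: per-group weights, e.g. inverse group frequency (assumed values)
    """
    per_pair_loss = F.binary_cross_entropy_with_logits(
        scores, labels.float(), reduction="none")
    weights = group_weights[group_ids]        # look up each pair's group weight
    return (weights * per_pair_loss).sum() / weights.sum()

# Hypothetical usage: upweight pairs from an under-represented group (id 3)
group_weights = torch.tensor([1.0, 1.0, 1.0, 2.5])
```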
The top 10 teams’ algorithms were found to discriminate most often against light-skinned males (42.2 percent of cases) and least often against dark-skinned females (11.2 percent) for the matching image pairs. The reverse result was found for non-matching pairs, with 45.5 percent discrimination against dark-skinned females, compared to only 12.6 percent against light-skinned males. In testing against the paranoidai dataset, however, the lowest frequency of bias was associated with females with dark skin color for both the positive (matching) and negative (non-matching) pairs.
“Despite the high accuracy none of the methods was free of bias,” the report authors conclude. “By analysing the results of top-10 teams we found that their algorithms tend to have higher false positive rates for females with dark skin tone and for samples where both individuals wear glasses. In contrast there were higher false negative rates for males with light skin tone and for samples where both individuals are younger than 35 years.”
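The metrics behind these findings reduce to false positive and false negative rates computed separately for each demographic subgroup. A minimal sketch of that per-group breakdown follows; the array names and group encoding are assumptions for illustration, not the challenge’s evaluation code.

```python
import numpy as np

def per_group_error_rates(decisions, labels, groups):
    """False positive and false negative rates per demographic group.

    decisions: predicted match (1) / non-match (0) for each pair
    labels:    ground-truth match (1) / non-match (0)
    groups:    demographic group of each pair (e.g. skin tone x gender)
    """
    rates = {}
    for g in np.unique(groups):
        idx = groups == g
        pos = labels[idx] == 1                 # genuine (matching) pairs
        neg = labels[idx] == 0                 # impostor (non-matching) pairs
        fnr = np.mean(decisions[idx][pos] == 0) if pos.any() else float("nan")
        fpr = np.mean(decisions[idx][neg] == 1) if neg.any() else float("nan")
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates
```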
Algorithm architecture matters for bias, researchers find
Architectural differences between algorithms trained on the same dataset can cause significant differences in biometric performance and bias, according to a study by researchers at Wichita State University searching for the source of demographic differences in the accuracy of facial recognition systems.
The paper, ‘Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups,’ examined face-based gender classification, evaluating various convolutional neural network models on the UTKFace and FairFace datasets, though at least one consumer media publication has misreported it as examining biometric facial identification.
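Because the study’s point is that two architectures trained on the same data can produce different demographic error patterns, the comparison ultimately rests on a per-group accuracy breakdown like the one sketched below; the function and variable names are illustrative assumptions, not the paper’s code.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, group_labels):
    """Per-group gender-classification accuracy, e.g. across gender-race groups.

    predictions:  predicted gender for each face image
    labels:       ground-truth gender
    group_labels: demographic group of each image (e.g. "dark-skinned female")
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, true, group in zip(predictions, labels, group_labels):
        total[group] += 1
        correct[group] += int(pred == true)
    return {g: correct[g] / total[g] for g in total}

# Running this breakdown for two models trained on the same dataset
# (e.g. a deeper vs. a lighter CNN) exposes architecture-driven disparities.
```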
In the future, the researchers plan to consider skin-tone reflectance properties, facial morphology and other factors as possible causes of demographic disparities in algorithm performance.
A practical concern for exam-takers
Biometric exam proctoring, and the problems it can cause for people in certain demographic groups, is receiving intense scrutiny in California with a State Supreme Court Justice ruling against a bid to block the technology’s use and make the bar examination open-book, Courthouse News reports.
The request had been made by 15 law school deans, who expressed concern over whether the system is feasible, given that students living in small apartments would be required to take the exam with no food or books visible, and cited allegations of technical problems, including with the biometric features of the proctoring software.
California Supreme Court Justice Tani Cantil-Sakauye notes that the policy of the National Conference of Bar Examiners prohibits open-book tests. The court had earlier delayed the exam, and lowered the minimum passing score from 1440 to 1390.
The State Bar is using ExamSoft’s proctoring solution, which features biometric facial recognition. A letter to the court from the American Civil Liberties Union (ACLU) said that an Arab-American examinee had tried unsuccessfully to verify his identity with the ExamSoft system 75 times, and a Black woman taking the exam is planning to shine a light directly at her face to avoid triggering false positives.
The State Bar replied that any exam violations would be determined through review by at least two human proctors and two State Bar reviewers.
The ACLU also suggested the software could violate the California Consumer Privacy Act.
Most biometric proctoring services use facial recognition, though TypingDNA launched a solution based on typing biometrics earlier this year.