Race, gender bias in matching algorithms remain – DHS report
An annual U.S. Homeland Security study of bias in commercial biometric software, released in August, suggests results similar to those the department has found in comparable work since 2018.
AI has a harder time matching face images when the subject is female, darker in skin tone or wears glasses, according to the report.
In fact, 57 percent of models delivered lower mated similarity scores for subjects with darker skin tones. Eyewear confused the algorithms in 96 percent of models.
However, image-acquisition products and matching algorithms have notably improved their accuracy on the faces of older subjects, according to department researchers Cynthia Cook, John Howard, Yevgeniy Sirotin and Jerry Tipton.
The report, although dated August 2023, is the most recent assessment and contains data current as of 2021. Staff from the Maryland Test Facility and the DHS’ Science and Technology Directorate performed the analysis.
Results released this summer largely “remain consistent” with the initial 2018 findings. Darker skin tone and gender still trip up the algorithms, although matching performance for women improves significantly when historical gallery images are closer in time to the probe image.
In three-quarters of models, women had lower mated similarity scores. The thinking is that women change their personal styling more often than men do.
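For readers unfamiliar with the metric, a mated similarity score is the score an algorithm assigns when a probe image is compared against an enrolled image of the same person. The minimal Python sketch below illustrates the kind of group-level comparison being described; the scores, group labels and the group_means helper are hypothetical placeholders, not data or code from the DHS evaluation.

```python
# Minimal sketch: comparing mean mated similarity scores across demographic
# groups. All numbers and labels below are hypothetical, for illustration only.
from statistics import mean

# Hypothetical mated similarity scores (probe vs. enrolled image of the same
# person), grouped by self-reported gender of the subject.
mated_scores = {
    "female": [0.81, 0.78, 0.84, 0.76, 0.80],
    "male":   [0.88, 0.85, 0.90, 0.86, 0.87],
}

def group_means(scores_by_group):
    """Return the mean mated similarity score for each group."""
    return {group: mean(scores) for group, scores in scores_by_group.items()}

means = group_means(mated_scores)
gap = means["male"] - means["female"]
print(f"Mean mated score by group: {means}")
print(f"Gap (male - female): {gap:.3f}")  # positive gap = lower scores for women
```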
The assessments are conducted in a specialized setting: commercial biometric systems are pressed into a simulated unattended border-control environment with a high throughput of people.
Matching and acquisition testing covered 158 system combinations from 2019 through 2021, cumulatively involving 1,590 volunteer participations by 949 unique individuals.