US oversight body calls for more real-world biometrics testing, bias protections
A U.S. federal agency is calling for more real-world testing of biometrics applications, stronger protections against bias and more transparency in biometrics use, after consulting with other agencies, “academics, advocacy groups and technology experts.”
The Government Accountability Office (GAO) identified six concerns shared by stakeholders and five considerations to address those concerns in the 66-page report to Congressional Committees on “Biometric Identification Technologies: Considerations to Address Information Gaps and Other Stakeholder Concerns.”
Concerns around biometrics include the exacerbation of systemic inequality, lack of transparency, biased outcomes, limited understanding of the technology’s performance and effects, the technical expertise of users, and the security and privacy of sensitive data. These concerns can be addressed, GAO suggests, through performance evaluations, improved training and guidance, promoting transparency, new and comprehensive data privacy laws or guidance, and adopting a risk-based approach to rules governing biometrics use.
The GAO found in a report released late last year that 15 of 20 agencies using AI were failing to meet their transparency requirements, consistent with its 2022 findings on federal agencies’ oversight and use of facial recognition.
In its introductory letter to Congressional Committees, the GAO notes the 2019 NIST test on demographic differentials, but not the follow-up evaluation in 2022. The agency also correctly notes that six individuals have been falsely matched by facial recognition systems, leading to wrongful arrests, and says “all of those individuals have been African American.” It neglects to mention, however, that the latest false arrest attributed to facial recognition to reach the press involved a white person.
The report outlines biometric technologies and their applications, and the U.S. federal government’s role in funding, regulating and providing guidance on their use.
The accuracy of biometrics has improved, GAO says, but challenges remain in real-world implementations, including those related to image quality and demographic disparities, which may be connected. “Both over- and under-exposed images can result in false negatives,” GAO notes. NIST officials reported that face biometrics algorithms have improved in accuracy and become more tolerant of changes in people’s appearance. “However,” the report continues, “false positive rates are still higher for certain demographic groups that are not sufficiently represented in the training data such as elderly East Asian women and elderly East African women.”
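To make the demographic differential concrete, here is a minimal sketch, not taken from the GAO or NIST reports, of how false match and false non-match rates can be computed per demographic group at a fixed decision threshold; this is the kind of per-group comparison underlying the disparities GAO cites. All names, data structures and sample values are hypothetical.

```python
# Hypothetical sketch: per-group error rates for a biometric matcher
# at a single global decision threshold. Not from the GAO report.
from dataclasses import dataclass

@dataclass
class Trial:
    group: str        # demographic label of the probe subject
    is_mated: bool    # True if probe and reference are the same person
    score: float      # comparison score from the face matcher

def error_rates_by_group(trials: list[Trial], threshold: float) -> dict[str, tuple[float, float]]:
    """Return {group: (false_match_rate, false_non_match_rate)} at a threshold."""
    rates = {}
    for g in {t.group for t in trials}:
        mated = [t for t in trials if t.group == g and t.is_mated]
        nonmated = [t for t in trials if t.group == g and not t.is_mated]
        # False non-match: a genuine pair scores below the threshold.
        fnmr = sum(t.score < threshold for t in mated) / len(mated) if mated else 0.0
        # False match: an impostor pair scores at or above the threshold.
        fmr = sum(t.score >= threshold for t in nonmated) / len(nonmated) if nonmated else 0.0
        rates[g] = (fmr, fnmr)
    return rates

# Example with fabricated trials: compare error rates across two groups.
trials = [
    Trial("group_a", True, 0.82), Trial("group_a", False, 0.41),
    Trial("group_b", True, 0.55), Trial("group_b", False, 0.63),
]
print(error_rates_by_group(trials, threshold=0.6))
# {'group_a': (0.0, 0.0), 'group_b': (1.0, 1.0)}  (key order may vary)
```

A gap in false match rates between groups at the same threshold is, in simplified form, the “demographic differential” that NIST testing measures and that GAO flags as a fairness concern.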
More operational testing is needed to understand real-world implementation challenges, according to the GAO. In voice biometrics, more research is needed into how different conditions and populations impact system performance.
The GAO also collected a sample of positive and negative impacts of biometrics use reported by interview subjects. The majority were negative, including an advocacy group’s estimate that 20 percent of unemployment insurance applicants were unable to complete identity verification with selfie biometrics.
The use of biometrics to deliver unemployment insurance during the COVID-19 pandemic is examined, with the GAO concluding that “Some communities may have experienced denials or delays in receiving benefits” due to technology bias.
The CBP One app and law enforcement uses of facial recognition are also considered, along with applications in financial services, healthcare and education.