NIST goes deeper into bias in biometrics
If NIST were a TV show, it would be Dragnet, the sober, just-the-facts U.S. detective show that saw crimes solved one understated observation, one calm inquiry at a time.
Case in point: NIST, the U.S. government’s chief biometric standards organization, has published “A Proposal for Identifying and Managing Bias Within Artificial Intelligence.”
It is a modest proposal, indeed. Some on the industry’s periphery say bias needs to be ripped out root and branch, but the depth and breadth of inputs into AI make that a significant challenge.
Instead, NIST researchers are looking first for a way to spot biometric bias and then to manage it, more achievable goals that could lead to a final reckoning.
They also have asked for public input on how best to evaluate trust in AI, an area NIST continues to drill into for trust metrics. The agency already benchmarks facial recognition accuracy with its well-known (and to some, naturally, infamous) Face Recognition Vendor Test.
The agency’s new draft report advocates for a discussion among business owners, university researchers, legal experts, sociologists, even members of marginalized populations most likely to suffer from AI bias.
To start, NIST wants this community to reach consensus standards and a risk-based framework that could result in “trustworthy and responsible AI.”
The scale of even NIST’s modest proposal becomes clear when one considers how many fundamental concepts remain undefined. Indeed, agency researchers say mitigating risk in AI is itself a “still insufficiently defined building block of trustworthiness.”
The researchers, working with the AI community, have identified eight components of trustworthy artificial intelligence, each of which can be defined differently by different communities: accuracy, explainability, interpretability, privacy, reliability, robustness, safety, and security.
It goes deeper, though. In announcing the draft, NIST pointed out that there are AI systems in the field written to model “criminality” or “employment suitability,” when neither concept can be credibly measured.
Instead, software developers substitute concrete but bias-laden proxies, such as where a person lives or how many years of schooling an applicant has completed.
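To see how such a proxy can smuggle bias into a system that never touches a protected attribute, consider a minimal sketch. The data, ZIP codes, and “favored neighborhood” list below are entirely hypothetical, not drawn from the NIST report; the point is only that a facially neutral screen on a proxy feature can reproduce group disparities.

```python
# Minimal, hypothetical sketch: a "neutral" screen on a proxy feature
# (ZIP code) produces unequal selection rates across groups.
from collections import Counter

# Hypothetical applicants: (group, zip_code, qualified)
applicants = [
    ("A", "10001", True), ("A", "10001", True), ("A", "10001", False),
    ("A", "10002", True),
    ("B", "10002", True), ("B", "10003", True), ("B", "10003", False),
    ("B", "10003", True),
]

# The screen never sees `group`; it only consults the ZIP-code proxy.
FAVORED_ZIPS = {"10001", "10002"}  # hypothetical "good neighborhood" list

def passes_screen(zip_code: str) -> bool:
    return zip_code in FAVORED_ZIPS

# Tally selection rates per group under the proxy-based screen.
selected, totals = Counter(), Counter()
for group, zip_code, _qualified in applicants:
    totals[group] += 1
    selected[group] += passes_screen(zip_code)  # True counts as 1

for group in sorted(totals):
    print(f"group {group}: selection rate {selected[group] / totals[group]:.0%}")

# Output: group A passes 100% of the time, group B only 25% -- a
# disparity created entirely by a feature that looks neutral on its face.
```

Because residence correlates with group membership in this toy data, the screen encodes the disparity the developer thought had been left out of the model.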
Comments are welcome through August 5.