NIST goes deeper into bias in biometrics


If NIST were a TV show, it would be Dragnet, the sober, just-the-facts U.S. detective show that solved crimes one understated observation, one calm inquiry at a time.

Case in point: NIST, the U.S. government’s chief biometric standards organization, has published “A Proposal for Identifying and Managing Bias Within Artificial Intelligence.”

It is a modest proposal, indeed. Some on the industry’s periphery say bias needs to be ripped out root and branch, but the depth and breadth of inputs into AI make that a significant challenge.

Instead, NIST researchers are looking first for a way to spot biometric bias and then to manage it, two more achievable goals on the path to a final reckoning.

They also have asked for public input into how best to evaluate trust in AI. And NIST continues to drill down into AI for trust metrics, benchmarking facial recognition accuracy with its well-known (and to some, naturally, infamous) Face Recognition Vendor Test.

The agency’s new draft report advocates for a discussion among business owners, university researchers, law experts, sociologists — even marginalized populations likely to suffer under AI bias.

To start, NIST wants to move this community to find consensus standards and a risk-based framework that could result in “trustworthy and responsible AI.”

The scale of even NIST’s new proposal becomes clear when looking at how many fundamental concepts remain undefined. Indeed, agency researchers say the idea of mitigating risk in AI itself is a “still insufficiently defined building block creating trustworthiness.”

The researchers, working with the AI community, have identified eight components of trustworthy artificial intelligence, each of which can be defined differently by different communities: accuracy, explainability, interpretability, privacy, reliability, robustness, safety, and security.

It goes deeper, though. In announcing the draft, NIST pointed out that there are AI systems in the field written to model “criminality” or “employment suitability,” when neither concept can be credibly measured.

Instead, software developers substitute concrete but bias-laden proxy metrics, such as where a person lives or how many years of school an applicant completed.

Comments are welcome through August 5.
