
NIST goes deeper into bias in biometrics

 


If NIST were a TV show, it would be Dragnet, the sober, just-the-facts U.S. detective show in which crimes were solved one understated observation, one calm inquiry at a time.

Case in point: NIST, the U.S. government’s chief biometric standards organization, has published “A Proposal for Identifying and Managing Bias Within Artificial Intelligence.”

It is a modest proposal, indeed. Some on the industry’s periphery say bias needs to be ripped out root and branch, but the depth and breadth of inputs into AI make that task a significant challenge.

Instead, NIST researchers are looking first for a way to spot biometric bias and then to manage it, both more achievable goals on the way to a final reckoning.

They also have asked for public input on how best to evaluate trust in AI, and NIST continues to drill down into AI for trust metrics. It already benchmarks facial recognition accuracy with its well-known (and to some, naturally, infamous) Face Recognition Vendor Test.

The agency’s new draft report advocates for a discussion among business owners, university researchers, legal experts, sociologists and even members of marginalized populations likely to suffer from AI bias.

To start, NIST wants this community to develop consensus standards and a risk-based framework that could result in “trustworthy and responsible AI.”

The scale of even NIST’s new proposal becomes clear when looking at how many fundamental concepts remain undefined. Indeed, agency researchers say the idea of mitigating risk in AI is itself a “still insufficiently defined building block creating trustworthiness.”

The researchers, working with the AI community, have identified eight components of trustworthy artificial intelligence, each of which can be defined differently by different communities: accuracy, explainability, interpretability, privacy, reliability, robustness, safety, and security.

It goes deeper, though. In announcing the draft, NIST pointed out that there are deployed AI systems written to model “criminality” or “employment suitability,” even though neither concept can be credibly measured.

Instead, software developers substitute concrete but bias-laden proxies, such as where a person lives or how many years of schooling an applicant completed.

Comments are welcome through August 5.
