
NIST goes deeper into bias in biometrics

 


If NIST were a TV show, it would be Dragnet, the sober, just-the-facts U.S. detective show that saw crimes solved one understated observation, one calm inquiry at a time.

Case in point: NIST, the U.S. government’s chief biometric standards organization, has published “A Proposal for Identifying and Managing Bias in Artificial Intelligence.”

It is a modest proposal, indeed. Some on the industry’s periphery say bias needs to be ripped out root and branch, but the depth and breadth of inputs into AI make that task a significant challenge.

Instead, NIST researchers are looking first for ways to spot bias in biometric systems and then to manage it, both more achievable goals on the path to a final reckoning.

They have also asked for public input on how best to evaluate trust in AI, and NIST continues to drill down into AI for trust metrics: it already benchmarks facial recognition accuracy with its well-known (and to some, naturally, infamous) Face Recognition Vendor Test.

The agency’s new draft report advocates for a discussion among business owners, university researchers, legal experts, sociologists and even the marginalized populations most likely to suffer under AI bias.

To start, NIST wants to move this community toward consensus standards and a risk-based framework that could result in “trustworthy and responsible AI.”

The scale of even NIST’s new proposal becomes clear when looking at how many fundamental concepts remain undefined. Indeed, agency researchers say that mitigating risk in AI is itself a “still insufficiently defined” building block for creating trustworthiness.

The researchers, working with the AI community, have identified eight components of trustworthy artificial intelligence, each of which can be defined differently by different communities: accuracy, explainability, interpretability, privacy, reliability, robustness, safety and security.

It goes deeper, though. In announcing the draft, NIST pointed out that there are AI systems in the field that were written to model “criminality” or “employment suitability,” even though neither concept can be credibly measured.

Instead, software developers substitute concrete but bias-laden metrics, like where a person lives or how many years of school an applicant completed.
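
That substitution is easy to see in miniature. The sketch below (Python, with entirely hypothetical data, feature names and numbers, not drawn from any system NIST examined) trains nothing fancier than a per-group average, yet it reproduces the bias baked into its proxy feature, because the model never observes the underlying concept at all.

```python
# Minimal sketch of proxy-variable bias. All data, names and numbers
# here are hypothetical, chosen only to illustrate the mechanism.
import random

random.seed(0)

# Synthetic applicants: a ZIP-code group (0 or 1) and years of schooling,
# distributed identically across both groups.
applicants = [
    {"zip_group": random.randint(0, 1), "school_years": random.randint(8, 20)}
    for _ in range(10_000)
]

# Historical "suitability" labels were partly decided by ZIP group.
# The unmeasurable concept itself was never recorded, only this proxy.
for a in applicants:
    merit = a["school_years"] / 20
    penalty = 0.2 if a["zip_group"] == 1 else 0.0
    a["label"] = 1 if random.random() < merit - penalty else 0

def approval_rate(group: int) -> float:
    """Empirical approval rate per ZIP group, i.e. the pattern any
    classifier fit on these features would end up learning."""
    rows = [a for a in applicants if a["zip_group"] == group]
    return sum(a["label"] for a in rows) / len(rows)

print(f"approval rate, ZIP group 0: {approval_rate(0):.2f}")
print(f"approval rate, ZIP group 1: {approval_rate(1):.2f}")
```

Run as-is, the two printed rates differ by roughly the size of the injected penalty even though “merit” is identically distributed across both groups; nothing in the features lets a downstream model tell that gap apart from genuine signal, which is exactly the problem NIST is flagging.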

Comments are welcome through August 5.
