
Standard for measuring face biometrics bias coming, AI bias mitigation needs wider scope

ISO/IEC and NIST, respectively, weigh in

A working draft of a standard for measuring bias in face biometrics has been published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), just as the U.S. National Institute of Standards and Technology (NIST) updates its work on bias in artificial intelligence more broadly.

The ISO/IEC 19795-10 ‘Information Technology – Biometric performance testing and reporting – Part 10: Quantifying biometric system performance variation across demographic groups’ standard has been in development for two years, according to a LinkedIn post from Maryland Test Facility Principal Data Scientist John Howard.

Comments on the draft are due by May 6.

Bias (or demographic differential) testing of facial recognition has so far been carried out mostly by NIST.
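The draft standard is not publicly available, so the following is only a rough sketch of the kind of metric demographic differential testing typically compares, not the methodology of ISO/IEC 19795-10 or of NIST’s evaluations: a false non-match rate computed per demographic group at a fixed threshold, with the spread summarized as a max-to-min ratio. The function names, scores, group labels and threshold are all hypothetical.

```python
import numpy as np

def fnmr_by_group(genuine_scores, group_labels, threshold):
    """False non-match rate (FNMR) per demographic group at a fixed threshold.

    genuine_scores: similarity scores from mated (same-person) comparisons
    group_labels:   demographic group of the subject in each comparison
    threshold:      scores below this value count as false non-matches
    """
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    group_labels = np.asarray(group_labels)
    rates = {}
    for group in np.unique(group_labels):
        scores = genuine_scores[group_labels == group]
        rates[str(group)] = float(np.mean(scores < threshold))
    return rates

def max_to_min_ratio(rates):
    """Crude differential summary: worst-performing group over best (nonzero rates only)."""
    values = [r for r in rates.values() if r > 0]
    return max(values) / min(values) if values else float("nan")

# Hypothetical scores and group labels, purely for illustration
scores = [0.91, 0.45, 0.88, 0.62, 0.79, 0.55]
groups = ["A", "A", "B", "B", "C", "C"]
rates = fnmr_by_group(scores, groups, threshold=0.6)
print(rates)                    # {'A': 0.5, 'B': 0.0, 'C': 0.5}
print(max_to_min_ratio(rates))  # 1.0 (highest nonzero FNMR / lowest)
```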

NIST argues for ‘socio-technical’ approach

An updated technical policy document from NIST recommends widening the scope of the search for sources of bias in AI systems. This, the organization says, can help improve the identification of AI bias and mitigate its harms.

The revised version of NIST Special Publication 1270, ‘Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,’ extends that scope to take in the social context AI systems are deployed in, using an iceberg metaphor. Statistical and computational biases make up only the visible, ‘above water’ portion of the iceberg, with human biases and systemic biases forming large sections below.

“If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI,” says NIST Principal Investigator for AI Bias Reva Schwartz, one of the report’s authors. “Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”

The authors argue that a ‘socio-technical’ approach is needed to effectively mitigate bias in AI.

The initial draft, published last year, identified eight components that contribute to making AI trustworthy.
