
Standard for measuring face biometrics bias coming, AI bias mitigation needs wider scope

ISO/IEC and NIST, respectively, weigh in

A working draft of a standard for measuring bias in face biometrics has been published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), just as the U.S. National Institute of Standards and Technology (NIST) updates its work on bias in artificial intelligence more broadly.

The ISO/IEC 19795-10 standard, ‘Information Technology – Biometric performance testing and reporting – Part 10: Quantifying biometric system performance variation across demographic groups,’ has been in development for two years, according to a LinkedIn post from Maryland Test Facility Principal Data Scientist John Howard.

Comments on the draft are due by May 6.

Bias (or demographic differential) testing in facial recognition is so far limited mostly to testing by NIST.
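To illustrate what demographic differential testing involves in practice, the sketch below (a minimal Python example with hypothetical comparison data and an assumed decision threshold, not drawn from the draft standard or NIST’s methodology) computes false match and false non-match rates separately for each demographic group and compares them:

```python
from collections import defaultdict

# Hypothetical comparison records: (group, is_mated, similarity_score).
# "Mated" pairs are two samples from the same person; "non-mated" pairs are
# from different people. Real evaluations use large, operationally relevant
# datasets rather than a handful of scores.
comparisons = [
    ("group_a", True, 0.91), ("group_a", True, 0.42), ("group_a", False, 0.10),
    ("group_a", False, 0.75), ("group_b", True, 0.88), ("group_b", True, 0.67),
    ("group_b", False, 0.05), ("group_b", False, 0.30),
]

THRESHOLD = 0.6  # assumed decision threshold


def error_rates_by_group(records, threshold):
    """Return {group: (FMR, FNMR)} at the given threshold."""
    counts = defaultdict(lambda: {"fm": 0, "nm": 0, "fnm": 0, "m": 0})
    for group, is_mated, score in records:
        c = counts[group]
        if is_mated:
            c["m"] += 1
            if score < threshold:   # mated pair rejected -> false non-match
                c["fnm"] += 1
        else:
            c["nm"] += 1
            if score >= threshold:  # non-mated pair accepted -> false match
                c["fm"] += 1
    return {
        g: (c["fm"] / c["nm"] if c["nm"] else 0.0,
            c["fnm"] / c["m"] if c["m"] else 0.0)
        for g, c in counts.items()
    }


rates = error_rates_by_group(comparisons, THRESHOLD)
for group, (fmr, fnmr) in rates.items():
    print(f"{group}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")

# One simple differential measure: the ratio of the worst to the best FNMR
# across groups (a ratio of 1.0 would mean identical performance).
fnmrs = [fnmr for _, fnmr in rates.values()]
if min(fnmrs) > 0:
    print("FNMR ratio (worst/best):", max(fnmrs) / min(fnmrs))
```

How such per-group error rates should be estimated, reported, and summarized is precisely the kind of question the draft standard is intended to settle.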

NIST argues for ‘socio-technical’ approach

An updated technical policy document from NIST recommends widening the scope of the search for sources of bias in AI systems. This, the organization says, can improve the identification of AI bias and help mitigate its harms.

The revised version of NIST Special Publication 1270, ‘Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,’ extends that scope to take in the social context AI systems are deployed in, using an iceberg metaphor. Statistical and computational biases make up only the visible, ‘above water’ portion of the iceberg, with human biases and systemic biases forming large sections below.

“If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI,” says NIST Principal Investigator for AI Bias Reva Schwartz, one of the report’s authors. “Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”

The authors argue that a ‘socio-technical’ approach is needed to effectively mitigate bias in AI.

The initial draft, published last year, identified eight components that contribute to making AI trustworthy.
