NIST seeks feedback on measurement of biometric bias

FRVT follow-up on demographic differentials shows progress
Bias in face biometrics has decreased as algorithm developers focus their efforts, according to the latest test results from the U.S. National Institute of Standards and Technology. Exactly how much they have improved, however, is difficult even to measure.

A draft of the NIST Face Recognition Vendor Test (FRVT) Part 8: Summarizing Demographic Differentials has been published for comment, with the agency seeking ways to improve its measurement of variations in biometric accuracy between groups of subjects.

Much of the document is given over to explaining the different statistics used to measure bias, and the advantages and limitations of each approach. The previous report on demographic differentials, FRVT Part 3, showed very low differentials among the most accurate algorithms, but significant variation overall.

Three measures, termed ‘Functional Fairness Measure Criteria (FFMC),’ were suggested by John Howard, Eli Laird, Yevgeniy Sirotin, Rebecca Rubin, Jerry Tipton and Arun Vemury of the Maryland Test Facility. NIST adds two more, but finds that none is perfectly suited to clearly expressing the test results.

NIST also delves into how to handle comparisons between very low error rates. One challenge is avoiding the suggestion that an algorithm with a near-zero error rate for one demographic and a still low, but relatively higher, error rate for another is worse than an algorithm that is less accurate for both.

The ‘Max/GeoMean’ measure is identified as the leading candidate, and presented in published test results.

Vendor results are shared by NIST, with each algorithm’s maximum false non-match rate (FNMR) and false match rate (FMR) compared to the geometric mean of those rates across demographic groups. A result of 1 indicates matching parity. FNMR Max/GeoMean results are almost all between 1 and 2, while FMR Max/GeoMean ranges from slightly over 6 to more than 300.
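The Max/GeoMean ratio described above can be sketched in a few lines. This is a minimal illustration, not NIST's implementation; the per-demographic error rates below are invented for the example.

```python
import math

def max_over_geomean(rates):
    """Max/GeoMean fairness ratio: the worst (maximum) per-demographic
    error rate divided by the geometric mean across all groups.
    A value of 1 indicates parity; larger values indicate greater disparity."""
    geo_mean = math.exp(sum(math.log(r) for r in rates) / len(rates))
    return max(rates) / geo_mean

# Hypothetical per-demographic FMRs for two algorithms:
balanced = [1.0e-5, 1.2e-5, 0.9e-5]   # similar rates across groups
skewed   = [1.0e-7, 1.0e-5, 1.0e-4]   # near-zero for one group, higher for another

print(max_over_geomean(balanced))  # close to 1: near parity
print(max_over_geomean(skewed))    # much larger: strong disparity
```

Because the geometric mean dampens the influence of near-zero rates, this measure penalizes an algorithm less for having one extremely low rate than a max/min ratio would, which is part of why NIST favors it for summarizing low-error-rate comparisons.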

Future NIST research will apply the same kind of testing to 1:N (identification) biometric systems, and an ISO standard is in development.

Corsight claims leadership in bias reduction

Corsight AI has achieved an equal false match rate across Black and white test groups for both male and female subjects, according to a company announcement.

The company’s algorithm scored a 1.01 FNMR Max/GeoMean and a 20.63 FMR Max/GeoMean in the July 2022 update of test results.

NIST also uses algorithms from Corsight, Clearview AI, CUbox, DeepGlint, Idemia, NtechLab and Paravision as examples in a comparison of error rates with different demographics compared with Eastern European subjects.

“We’re thrilled because this is another step forward in countering claims that bias is damaging the effectiveness of facial recognition technology,” comments Corsight Chief Privacy Officer Tony Porter. “The argument that facial recognition software is not capable of being fair is frozen in time and the performance of Corsight’s latest submission demonstrates that.”

