
American researchers probe where biometric bias comes in and how to measure it

Researchers from the Identity and Data Sciences Lab at the Maryland Test Facility have published a pair of papers examining why biometric systems are so often found to be less effective for some demographic groups, and how those disparities can be measured.

John Howard of MdTF, which is used for DHS’ biometrics tests, pointed out the papers in a LinkedIn post.

‘Disparate impact in facial recognition stems from the broad homogeneity effect: A case study and method to resolve’ attributes the problem of biometric bias to “demographic clustering.” This is the phenomenon in which features determined, at least in part, by a person’s gender or ethnicity inflate similarity scores between different individuals from the same demographic group.

The paper shows that it is possible to remove feature patterns shared within demographic groups while keeping the distinct features needed for facial recognition. The team used linear dimensionality-reduction techniques to increase the “fairness” of two ArcFace algorithms, as measured in four different ways, without lowering true match rates.
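One simple linear approach in this spirit is to project each face embedding onto the subspace orthogonal to the directions spanned by demographic group means, stripping out the component shared within each group. This is an illustrative sketch of the general idea, not necessarily the authors' exact method; the function name and synthetic data are assumptions for the example.

```python
import numpy as np

def remove_group_directions(embeddings, group_labels):
    """Project embeddings onto the subspace orthogonal to the
    demographic group-mean directions, removing the linear
    component each feature vector shares with its group."""
    X = np.asarray(embeddings, dtype=float)
    labels = np.asarray(group_labels)
    groups = sorted(set(group_labels))
    # Centered group means span the "demographic" directions
    means = np.stack([X[labels == g].mean(axis=0) for g in groups])
    means -= means.mean(axis=0)
    # Orthonormal basis for the group-mean subspace
    q, _ = np.linalg.qr(means.T)
    k = len(groups) - 1  # centered means have rank <= n_groups - 1
    q = q[:, :k]
    # Subtract each embedding's projection onto that subspace
    return X - (X @ q) @ q.T

# Synthetic example: two groups of 8-D embeddings with a shared offset
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:50] += 2.0  # group 0 clusters away from group 1
Y = remove_group_directions(X, [0] * 50 + [1] * 50)
```

After the projection, the two groups' mean embeddings coincide, so a similarity score can no longer benefit from the shared group offset, while within-group (identity-specific) variation is untouched.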

‘Evaluating proposed fairness models for face recognition algorithms’ considers the Fairness Discrepancy Rate (FDR) proposed by Idiap researchers and the Inequity Rate (IR) proposed by NIST researchers. Both metrics are found to be difficult to interpret due to inherent mathematical characteristics. The study authors therefore propose the Functional Fairness Measure Criteria (FFMC) to help with interpretations of the above metrics.

They also develop a new measure, the Gini Aggregation Rate for Biometric Equitability (GARBE). This measurement technique is based on the Gini coefficient, which is a statistical measure of dispersion typically used in measuring income inequality.

The work on an evaluation method is intended to directly support ISO/IEC 19795-10, which sets an international standard for measuring demographic bias in facial recognition.

Both papers appeared in the proceedings of the Fairness in Biometrics Workshop at the 26th International Conference on Pattern Recognition (ICPR 2022).



