
Idiap researchers propose biometric fairness metrics to decouple bias from accuracy

A paper accepted for publication proposes several new measures for quantifying demographic differentials, or bias, in biometric identity verification systems. The new metrics represent an effort to move beyond whether a sample has been matched to consider how well it matched, by examining matching scores.

The paper ‘Fairness Index Measures to Evaluate Bias in Biometric Recognition’ was authored by Sébastien Marcel and Ketan Kotwal of the Idiap Research Institute. It has been accepted by the International Conference on Pattern Recognition Workshops.

While biometric bias is often associated with facial recognition, the researchers say their metrics are agnostic to the biometric modality used.

The recently proposed Fairness Discrepancy Rate (FDR) is considered, along with the use of “the ROC (Receiver Operating Characteristic) curve as a proxy to measure demographic differentials.”

“While few, existing fairness measures are based on post-decision data (such as verification accuracy) of biometric systems, we discuss how pre-decision data (score distributions) provide useful insights towards demographic fairness,” the authors write in the abstract.

Marcel and Kotwal propose methods based on weighted fusion of results for each of the three measures, along with three variants of each measure to allow assessment from multiple perspectives.

The Separation Fairness Index (SFI) measures how far genuine and impostor matching scores for different demographic groups depart from expected values. The Compactness Fairness Index (CFI) measures the spread of scores across different groups. The Distribution Fairness Index (DFI) measures equitability across overall score distributions. In each case, values close to 1.0 indicate similar scores across groups, and therefore a fair system.
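To illustrate the general idea of a score-distribution fairness index, the toy function below compares match-score histograms across demographic groups and returns a value near 1.0 when the distributions are similar. This is a hedged sketch only: the article does not give the paper's actual SFI/CFI/DFI formulas, so the total-variation comparison used here, and the function and parameter names, are illustrative assumptions, not the authors' method.

```python
# Toy sketch of a distribution-style fairness index. NOT the paper's
# SFI/CFI/DFI formula (which is not given in this article); it only
# demonstrates the principle that similar score distributions across
# demographic groups should yield a value near 1.0.

def toy_distribution_fairness(scores_by_group, bins=10, lo=0.0, hi=1.0):
    """scores_by_group: dict mapping group name -> list of match scores in [lo, hi]."""
    width = (hi - lo) / bins

    def histogram(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int((s - lo) / width), bins - 1)  # clamp top edge into last bin
            counts[idx] += 1
        total = len(scores)
        return [c / total for c in counts]

    hists = [histogram(s) for s in scores_by_group.values()]

    # Average total-variation distance between every pair of group histograms.
    distances = []
    for i in range(len(hists)):
        for j in range(i + 1, len(hists)):
            tv = 0.5 * sum(abs(a - b) for a, b in zip(hists[i], hists[j]))
            distances.append(tv)
    avg = sum(distances) / len(distances) if distances else 0.0
    return 1.0 - avg  # 1.0 = identical score distributions across groups


# Example: two groups whose score distributions differ slightly.
groups = {
    "group_a": [0.82, 0.79, 0.91, 0.85],
    "group_b": [0.61, 0.80, 0.90, 0.84],
}
print(toy_distribution_fairness(groups))
```

Since the article indicates only that closeness to 1.0 signals fairness, the choice of total-variation distance here is a stand-in; the paper's measures would need to be consulted for the real definitions.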

If successful, these metrics indicate how fair a biometric verification system is separately from its accuracy, but they are intended to complement, not replace, outcome-based fairness measures.

NIST is also working on how to measure biometric bias, and is currently seeking feedback on the state of the art.
