Idiap researchers propose biometric fairness metrics to decouple bias from accuracy

Several new measures for quantifying demographic differentials, or bias, in biometric identity verification systems are proposed in a paper accepted for publication. The new metrics represent an effort to move beyond whether a sample has been matched to how well it matches, by taking matching scores into account.

The paper ‘Fairness Index Measures to Evaluate Bias in Biometric Recognition’ was authored by Sébastien Marcel and Ketan Kotwal of the Idiap Research Institute. It has been accepted by the International Conference on Pattern Recognition Workshops.

While biometric bias is often associated with facial recognition, the researchers say their metrics are agnostic to the biometric modality used.

The recently proposed Fairness Discrepancy Rate (FDR) is considered, along with the use of “the ROC (Receiver Operating Characteristic) curve as a proxy to measure demographic differentials.”
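The FDR is a post-decision measure computed at a fixed decision threshold. Purely as an illustration, the sketch below assumes the commonly cited formulation, in which the largest pairwise gaps in false match rate and false non-match rate across demographic groups are combined with a weight alpha; the function name `fdr` and the example error rates are hypothetical, and the exact weighting used in the literature may differ.

```python
from itertools import combinations

def fdr(fmr_by_group, fnmr_by_group, alpha=0.5):
    """Sketch of a Fairness Discrepancy Rate-style score at a fixed threshold.

    fmr_by_group / fnmr_by_group: dicts mapping a demographic group label to
    its false match rate and false non-match rate at the chosen threshold.
    alpha weights the two error types; a value of 1.0 indicates a fair system.
    """
    groups = list(fmr_by_group)
    # Largest pairwise gap in each error rate across demographic groups.
    a = max(abs(fmr_by_group[g1] - fmr_by_group[g2])
            for g1, g2 in combinations(groups, 2))
    b = max(abs(fnmr_by_group[g1] - fnmr_by_group[g2])
            for g1, g2 in combinations(groups, 2))
    return 1.0 - (alpha * a + (1.0 - alpha) * b)

# Example: two groups with slightly different error rates at the same threshold.
print(fdr({"A": 0.001, "B": 0.003}, {"A": 0.02, "B": 0.05}))
```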

“While few existing fairness measures are based on post-decision data (such as verification accuracy) of biometric systems, we discuss how pre-decision data (score distributions) provide useful insights towards demographic fairness,” the authors write in the abstract.

For each of their three measures, Marcel and Kotwal propose methods based on a weighted fusion of results, along with three variants of each measure to allow assessment from multiple perspectives.

The Separation Fairness Index (SFI) measures how far genuine and impostor matching scores for different demographic groups depart from expected values. The Compactness Fairness Index (CFI) measures the spread of scores across different groups. The Distribution Fairness Index (DFI) measures the equitability of overall score distributions. In each case, a value close to 1.0 indicates similar scores across groups, and therefore a fair system.
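The paper’s formal definitions of SFI, CFI and DFI are not reproduced here. Purely to illustrate the pre-decision idea of comparing score distributions across groups, the sketch below computes a DFI-like equitability value by comparing each group’s score histogram against the pooled distribution; the function `dfi_like`, the use of total variation distance and the synthetic scores are all assumptions for demonstration, not the authors’ actual method.

```python
import numpy as np

def dfi_like(scores_by_group, bins=50):
    """Illustrative distribution-equitability index (not the paper's exact DFI).

    Compares each group's score histogram with the pooled histogram using
    total variation distance; returns 1.0 when all groups share the same
    distribution, and lower values as groups diverge.
    """
    pooled = np.concatenate(list(scores_by_group.values()))
    edges = np.histogram_bin_edges(pooled, bins=bins)
    ref, _ = np.histogram(pooled, bins=edges)
    ref = ref / ref.sum()
    worst = 0.0
    for scores in scores_by_group.values():
        hist, _ = np.histogram(scores, bins=edges)
        hist = hist / hist.sum()
        tv = 0.5 * np.abs(hist - ref).sum()  # total variation distance in [0, 1]
        worst = max(worst, tv)
    return 1.0 - worst

# Synthetic example: two groups with slightly shifted score distributions.
rng = np.random.default_rng(0)
groups = {"A": rng.normal(0.60, 0.10, 5000), "B": rng.normal(0.55, 0.10, 5000)}
print(dfi_like(groups))
```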

If successful, these metrics would indicate how fair a biometric verification system is separately from its accuracy, but they are intended to complement, not replace, outcome-based fairness measures.

NIST is also working on how to measure biometric bias, and is currently seeking feedback on the state of the art.
