American researchers probe where biometric bias comes from and how to measure it
Researchers from the Identity and Data Sciences Lab at the Maryland Test Facility have published a pair of papers examining why biometric systems are so often found to be less effective for some demographic groups, and how those disparities can be measured.
‘Disparate impact in facial recognition stems from the broad homogeneity effect: A case study and method to resolve’ attributes biometric bias to “demographic clustering”: the phenomenon in which features determined (at least in part) by gender or ethnicity inflate similarity scores between different individuals from the same demographic group.
The paper shows that it is possible to remove feature patterns shared within demographic groups while preserving the distinctive features needed for facial recognition. The team used linear dimensionality-reduction techniques to increase the “fairness” of two ArcFace algorithms, as measured four different ways, without lowering true match rates.
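The paper does not reproduce its method here, but the general idea of linearly removing feature directions shared within demographic groups can be sketched as follows. This is an illustrative approach, not the authors' exact algorithm: it finds the principal directions spanned by demographic-group centroids and projects them out of each embedding, leaving identity-specific variation. The function name and parameters are hypothetical.

```python
import numpy as np

def remove_group_directions(embeddings, groups, n_dirs=1):
    """Illustrative sketch: project out the top principal directions
    spanned by demographic-group centroids, keeping the remaining
    identity-specific variation in the embeddings.

    embeddings: (n, d) array of face embeddings
    groups:     length-n array of demographic group labels
    """
    # Centroid of each demographic group in embedding space
    centroids = np.stack([embeddings[groups == g].mean(axis=0)
                          for g in np.unique(groups)])
    # Principal directions of the (centered) centroids
    centered = centroids - centroids.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_dirs]                       # (n_dirs, d)
    # Remove each embedding's component along those shared directions
    cleaned = embeddings - embeddings @ basis.T @ basis
    # Re-normalize for cosine-similarity matching
    return cleaned / np.linalg.norm(cleaned, axis=1, keepdims=True)
```

With two groups, the single top direction is the line connecting the two centroids, so removing it equalizes the group means before re-normalization.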
‘Evaluating proposed fairness models for face recognition algorithms’ considers the Fairness Discrepancy Rate (FDR) proposed by Idiap researchers and the Inequity Rate (IR) proposed by NIST researchers. Both metrics are found to be difficult to interpret because of inherent mathematical characteristics. The study authors therefore propose the Functional Fairness Measure Criteria (FFMC) to aid interpretation of such metrics.
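For readers unfamiliar with the FDR, a commonly cited form of the metric can be sketched as below. This is a simplified illustration at a single operating threshold, and details may differ from the Idiap proposal; the function name and the dict-based inputs are assumptions.

```python
def fairness_discrepancy_rate(fmrs, fnmrs, alpha=0.5):
    """Illustrative Fairness Discrepancy Rate at one fixed threshold.

    fmrs, fnmrs: dicts mapping demographic group -> false match rate
    and false non-match rate at that threshold. alpha weights the two
    terms. A value of 1.0 indicates no measured disparity.
    """
    a = max(fmrs.values()) - min(fmrs.values())    # largest FMR gap
    b = max(fnmrs.values()) - min(fnmrs.values())  # largest FNMR gap
    return 1.0 - (alpha * a + (1.0 - alpha) * b)
```

One interpretability difficulty the paper points at is visible even here: because typical FMR and FNMR gaps are tiny fractions, the resulting FDR values cluster very close to 1.0, making systems hard to distinguish.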
They also develop a new measure, the Gini Aggregation Rate for Biometric Equitability (GARBE). It is based on the Gini coefficient, a statistical measure of dispersion typically used to quantify income inequality.
The work on an evaluation method is intended to directly support ISO/IEC 19795-10, an international standard for measuring how biometric performance varies across demographic groups.
Both papers appeared in the proceedings of the 26th International Conference on Pattern Recognition (ICPR 2022) Fairness in Biometrics workshop.