Report says lack of diversity in face biometrics datasets extends to expression, emotion
A pair of new studies on algorithmic bias in face biometrics both suggest that the lack of diversity in datasets needs to be addressed. Research on expression imbalance suggests a way to mitigate the problem, while an explainer from the Turing Institute calls for pushback against the seemingly inevitable proliferation of facial recognition.
An academic study on ‘Facial Expressions as a Vulnerability in Face Recognition’ from four researchers associated with MIT, Barcelona’s Universitat Oberta de Catalunya, and the Universidad Autonoma de Madrid suggests that databases with a greater balance of facial expressions should be used to train facial recognition models.
The lack of diverse expressions could create a security vulnerability, the researchers suggest, impacting the matching scores returned by facial recognition systems.
The paper builds on their previous work on ‘Learning Emotional-Blinded Face Representations’, which described the imbalance between facial expressions in training datasets, and set out to reduce the importance of “emotional information” in face biometrics.
To solve this problem, they suggest two different methods for algorithms to learn “emotional-blinded face representations.” One, which they call “SensitiveNets,” involves learning a discriminator and an “adversarial regularizer to reduce facial expression information.” The other, “Learning not to Learn,” uses a pre-trained facial expression classifier so that the face recognition model learns to avoid the information that classifier analyzes.
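The paper’s methods are more involved, but a minimal, hypothetical sketch of the adversarial-regularizer idea might look like the following, assuming a PyTorch setup. The module sizes, loss weighting and entropy-style confusion penalty here are illustrative assumptions, not the authors’ SensitiveNets implementation.

```python
# Sketch: train a face embedding for identity while penalizing any
# expression information an auxiliary classifier can recover from it.
# All shapes, heads and the alpha weighting are illustrative only.
import torch
import torch.nn as nn

embedding_dim, n_identities, n_expressions = 128, 1000, 7

# Stand-in face encoder producing the biometric embedding.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(112 * 112 * 3, embedding_dim))
id_head = nn.Linear(embedding_dim, n_identities)      # identity task
expr_head = nn.Linear(embedding_dim, n_expressions)   # adversary / regularizer

ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(id_head.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(expr_head.parameters(), lr=1e-4)

def training_step(images, id_labels, expr_labels, alpha=0.1):
    # 1) Train the expression head to predict expression from the embedding.
    emb = encoder(images)
    adv_loss = ce(expr_head(emb.detach()), expr_labels)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the encoder and identity head: keep identity accuracy while
    #    maximizing the expression head's uncertainty, pushing expression
    #    information out of the embedding.
    emb = encoder(images)
    id_loss = ce(id_head(emb), id_labels)
    expr_probs = torch.softmax(expr_head(emb), dim=1)
    neg_entropy = (expr_probs * torch.log(expr_probs + 1e-8)).sum(dim=1).mean()
    loss = id_loss + alpha * neg_entropy
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return id_loss.item(), adv_loss.item()
```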
For the vulnerability test, the researchers employed three facial recognition models and four databases: the Compound Facial Expressions of Emotion database, Extended Cohn-Kanade, CelebA and MS-Celeb-1M, all of which skew heavily toward neutral expressions and contain, for example, more happy faces than sad ones.
They found that while facial expression does not affect impostor comparisons (negative matches), it can reduce the performance of genuine comparisons by up to 40 percent. Many “Facial Action Units” significantly affect genuine matching scores.
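As an illustration of how such a gap could be surfaced (this is not the study’s evaluation code), a hedged sketch follows, assuming face embeddings are available as numpy arrays and cosine similarity is the matcher:

```python
# Group genuine and impostor similarity scores by the probe's expression,
# so a drop in genuine scores for non-neutral faces becomes visible.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_by_expression(pairs):
    """pairs: list of (emb_a, emb_b, same_identity: bool, expression: str)."""
    genuine, impostor = {}, {}
    for emb_a, emb_b, same_id, expression in pairs:
        bucket = genuine if same_id else impostor
        bucket.setdefault(expression, []).append(cosine(emb_a, emb_b))
    return ({e: float(np.mean(s)) for e, s in genuine.items()},
            {e: float(np.mean(s)) for e, s in impostor.items()})

# If the study's finding holds, mean genuine scores drop for non-neutral
# expressions while impostor scores stay roughly flat.
```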
The vulnerability can be mitigated, however, by using datasets that are more balanced in terms of facial expressions and by applying other bias-reduction methods; the paper points specifically to recent research from a trio of Michigan State University researchers.
Turing Institute calls for society to take back control from technologists
An explainer by Dr. David Leslie of The Alan Turing Institute, ‘Understanding bias in facial recognition technologies,’ addresses the potential human rights risks associated with facial detection and recognition technologies (“FDRTs”).
Leslie summarizes the pro- and anti-facial recognition camps, and notes that face biometrics seem to be here to stay. The impression that the “coming pervasiveness” of facial recognition is unavoidable, however, is itself a problem, Leslie states. This is because a focus on remediation has crowded out important dialogue about “more basic ethical concerns,” because the technology has proliferated unevenly, with problems ranging from disproportionately benefiting the world’s already-privileged to bias and discrimination, and because the supposed inevitability is false.
In the end, he mostly sides with what he calls at one point the “increasingly strident chorus of critical voices” against facial recognition, calling for “members of society writ large” to jointly decide on the permissibility of the technology. Leslie provides three suggestions as a minimum starting point for restoring technology governance to society (as opposed to practical self-governance): robust governance mechanisms for transparency and accountability; strong privacy preservation, consent and notice guarantees; and bias-mitigation measures, discrimination-aware design, and related benchmarking.