
Report says lack of diversity in face biometrics datasets extends to expression, emotion


A pair of new studies on algorithmic bias in face biometrics both suggest that the lack of diversity in datasets needs to be addressed. Research on expression imbalance suggests a way to mitigate the problem, while an explainer from the Turing Institute calls for pushback against the seemingly inevitable proliferation of facial recognition.

An academic study on ‘Facial Expressions as a Vulnerability in Face Recognition’ from four researchers associated with MIT, Barcelona’s Universitat Oberta de Catalunya, and the Universidad Autonoma de Madrid suggests that databases with a greater balance of facial expressions should be used to train facial recognition models.

The lack of diverse expressions could create a security vulnerability, the researchers suggest, impacting the matching scores returned by facial recognition systems.

The paper builds on their previous work on ‘Learning Emotional-Blinded Face Representations’, which described the imbalance between facial expressions in training datasets, and set out to reduce the importance of “emotional information” in face biometrics.

To address this problem, they propose two methods for learning “emotional-blinded face representations.” The first, which they call “SensitiveNets,” learns a discriminator together with an “adversarial regularizer to reduce facial expression information.” The second, “Learning not to Learn,” uses a pre-trained facial expression classifier and steers the recognition model away from the features that classifier relies on.
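The adversarial-regularizer idea can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the toy embeddings, the linear identity and expression heads, and the weighting `lam` are all hypothetical, and it shows only the combined objective (identity loss minus a penalty rewarding expression leakage), not a full training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of 4 face embeddings, 8 dimensions each (hypothetical sizes).
emb = rng.normal(size=(4, 8))

# Hypothetical linear heads: one predicts identity, one predicts expression.
W_id = rng.normal(size=(8, 3))    # 3 identities
W_expr = rng.normal(size=(8, 2))  # 2 expressions (e.g. neutral / happy)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the correct class.
    return float(-np.log(probs[np.arange(len(labels)), labels]).mean())

id_labels = np.array([0, 1, 2, 0])
expr_labels = np.array([0, 1, 0, 1])

loss_id = cross_entropy(softmax(emb @ W_id), id_labels)
loss_expr = cross_entropy(softmax(emb @ W_expr), expr_labels)

# Adversarial objective: preserve identity information while *maximizing*
# the expression classifier's loss, i.e. stripping expression information
# out of the embedding. lam trades off the two terms.
lam = 0.5
adversarial_objective = loss_id - lam * loss_expr
```

Minimizing this objective over the embedding network (with the expression head trained to do its best) is what pushes the representation toward being "blind" to expression.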

For the vulnerability test, the researchers employed three facial recognition models and four databases: the Compound Facial Expressions of Emotion database, Extended Cohn-Kanade, CelebA, and MS-Celeb-1M. All of these skew heavily toward neutral expressions, and also contain more happy faces than sad ones, for example.

They found that while facial expression does not affect negative (impostor) matches, it can reduce the performance of genuine comparisons by up to 40 percent, and many “Facial Action Units” affect genuine matching scores significantly.

This vulnerability can be mitigated, however, by using datasets that are more balanced in terms of facial expressions, and by applying other bias-reduction methods, the paper referring specifically to recent research from a trio of Michigan State University researchers.

Turing Institute calls for society to take back control from technologists

An explainer by Dr. David Leslie of The Alan Turing Institute, ‘Understanding bias in facial recognition technologies,’ addresses the potential human rights risks associated with facial detection and recognition technologies (“FDRTs”).

Leslie summarizes the pro- and anti-facial recognition camps, and notes that face biometrics appear to be here to stay. That impression of an unavoidable “coming pervasiveness” of facial recognition is itself a problem, Leslie argues, for three reasons: a focus on remediation has crowded out important dialogue about “more basic ethical concerns”; the technology has proliferated unevenly, with problems ranging from disproportionately helping the world’s already-privileged to bias and discrimination; and the inevitability is false.

In the end, he mostly sides with what he calls at one point the “increasingly strident chorus of critical voices” against facial recognition, calling for “members of society writ large” to jointly decide on the permissibility of the technology. Leslie provides three suggestions as a minimum starting point for restoring technology governance to society (as opposed to practical self-governance): robust governance mechanisms for transparency and accountability; strong privacy preservation, consent, and notice guarantees; and bias-mitigation measures, discrimination-aware design, and related benchmarking.
