Report says lack of diversity in face biometrics datasets extends to expression, emotion

Turing Institute calls for facial recognition technology governance

A pair of new studies on algorithmic bias in face biometrics both suggest that the lack of diversity in datasets needs to be addressed. Research on expression imbalance proposes a way to mitigate the problem, while an explainer from the Turing Institute calls for pushback against the seemingly inevitable proliferation of facial recognition.

An academic study on ‘Facial Expressions as a Vulnerability in Face Recognition’ from four researchers associated with MIT, Barcelona’s Universitat Oberta de Catalunya, and the Universidad Autonoma de Madrid, suggests that databases with a greater balance of facial expressions should be used to train facial recognition models.

The lack of diverse expressions could create a security vulnerability, the researchers suggest, impacting the matching scores returned by facial recognition systems.

The paper builds on their previous work on ‘Learning Emotional-Blinded Face Representations’, which described the imbalance between facial expressions in training datasets, and set out to reduce the importance of “emotional information” in face biometrics.

To solve this problem, they suggest two different methods for algorithms to learn “emotional-blinded face representations.” One, which they call “SensitiveNets,” involves learning a discriminator and an “adversarial regularizer to reduce facial expression information.” The other, “Learning not to Learn,” uses a pre-trained facial expression classifier and trains the face model to avoid the information that classifier relies on.
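
The adversarial-regularizer idea can be illustrated with a toy sketch. This is not the paper's actual SensitiveNets implementation; it is a minimal numpy illustration, assuming a combined objective that adds an identity-recognition loss to a penalty that is small when a frozen expression classifier is maximally confused (uniform predictions) and large when it is confident:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def blinded_loss(identity_loss, expr_logits, lam=0.5):
    """Toy combined objective in the spirit of an adversarial regularizer:
    keep the identity loss low while pushing a frozen expression classifier
    toward maximum uncertainty over the embedding."""
    p = softmax(expr_logits)
    max_ent = np.log(len(p))  # entropy of a uniform distribution
    # The penalty term is zero when the classifier is fully confused.
    return identity_loss + lam * (max_ent - entropy(p))

# Confident expression prediction from the embedding -> larger total loss
confident = blinded_loss(1.0, np.array([5.0, 0.0, 0.0]))
# Fully confused classifier -> penalty vanishes, only identity loss remains
confused = blinded_loss(1.0, np.array([0.1, 0.1, 0.1]))
```

Minimizing such an objective rewards embeddings from which expression cannot be recovered, which is the intuition behind "blinding" the representation.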

For the vulnerability test, the researchers employed three facial recognition models and four databases: the Compound Facial Expressions of Emotion database, Extended Cohn-Kanade, CelebA and MS-Celeb-1M. All four tend to consist mostly of neutral expressions, and contain more happy faces than sad ones, for example.

They found that while facial expression does not affect negative (impostor) comparisons, it can reduce the performance of genuine comparisons by up to 40 percent, with many individual “Facial Action Units” significantly affecting genuine matching scores.
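
The asymmetry between genuine and impostor comparisons can be sketched in a few lines. The embeddings and the expression "shift" below are synthetic random vectors, not outputs of the models tested in the paper; the sketch only shows how a within-identity perturbation drags down the genuine cosine score while impostor scores stay near zero:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, the usual matching score for face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
neutral = rng.normal(size=128)            # enrolled template (neutral face)
expression_shift = rng.normal(size=128)   # synthetic effect of a strong expression
smiling = neutral + 0.8 * expression_shift  # same identity, different expression
other = rng.normal(size=128)              # a different identity (impostor)

genuine_neutral = cosine(neutral, neutral)   # 1.0 by construction
genuine_smiling = cosine(neutral, smiling)   # degraded genuine score
impostor = cosine(neutral, other)            # near zero for random vectors
```

The genuine score against the expressive probe falls well below the neutral-probe score, while the impostor score is unaffected by expression, mirroring the direction of the paper's finding.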

This can be mitigated, however, by using datasets that are more balanced in terms of facial expressions, and by applying other bias-reduction methods; the authors point specifically to recent research from a trio of Michigan State University researchers.

Turing Institute calls for society to take back control from technologists

An explainer by Dr. David Leslie of The Alan Turing Institute, ‘Understanding bias in facial recognition technologies,’ addresses the potential human rights risks associated with facial detection and recognition technologies (“FDRTs”).

Leslie summarizes the pro- and anti-facial recognition camps, and notes that face biometrics seem to be here to stay. The impression of an unavoidable “coming pervasiveness” of facial recognition, however, is itself a problem, Leslie states: a focus on remediation has sidelined important dialogue about “more basic ethical concerns”; the technology has proliferated unevenly, with problems ranging from disproportionately benefiting the world’s already-privileged to bias and discrimination; and the inevitability is, in fact, false.

In the end, he mostly sides with what he calls at one point the “increasingly strident chorus of critical voices” against facial recognition, calling for “members of society writ large” to jointly decide on the permissibility of the technology. Leslie provides three suggestions as a minimum starting point for restoring technology governance to society (as opposed to practical self-governance): robust governance mechanisms for transparency and accountability; strong privacy preservation, consent and notice guarantees; and bias-mitigation measures, discrimination-aware design, and related benchmarking.
