
New paper explores facial recognition biases beyond demographics

The well-reported biases around skin tone and sex are not the only biases held by facial recognition systems, a new paper finds, calling for substantially more development of face biometric systems in order for them to be fair.

‘A Comprehensive Study on Face Recognition Biases Beyond Demographics’ from a German-Spanish team tested the FaceNet and ArcFace facial recognition models with the MAAD-Face dataset, which has more than 120 million attribute annotations for 3.3 million face images, to see whether the models returned biases beyond explicit demographics such as age, sex and skin tone.

The team also tested non-explicit demographic attributes such as accessories, hairstyles and colors, face shapes, facial anomalies and make-up.

The early release of the full paper, which may be edited further before final publication, includes results for each attribute tested, with revealing graphs of the level of bias for both FaceNet and ArcFace that distinguish attributes which degrade biometric recognition performance from those which enhance it.
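The kind of per-attribute comparison described above can be illustrated with a minimal sketch. This is not the paper's actual methodology; the embeddings below are synthetic stand-ins for FaceNet or ArcFace outputs, and the attribute flag mimics a MAAD-Face-style binary annotation. The idea is simply to compare recognition quality (here, genuine-pair cosine similarity) between images with and without a given attribute:

```python
# Hedged sketch: measuring whether a facial attribute is associated with
# degraded recognition. Embeddings are random stand-ins, NOT real model output.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def make_genuine_pair(noise):
    """Two synthetic embeddings of the 'same person'; more noise = harder match."""
    base = rng.normal(size=128)
    return base, base + rng.normal(scale=noise, size=128)

# Simulate an attribute that makes genuine pairs noisier (harder to match).
pairs = [(*make_genuine_pair(0.3), True) for _ in range(200)] + \
        [(*make_genuine_pair(0.1), False) for _ in range(200)]

# Mean genuine similarity per attribute group: a lower mean for the
# attribute-positive group would indicate degraded recognition for it.
with_attr = np.mean([cosine(a, b) for a, b, flag in pairs if flag])
without_attr = np.mean([cosine(a, b) for a, b, flag in pairs if not flag])
print(f"mean genuine similarity with attribute:    {with_attr:.3f}")
print(f"mean genuine similarity without attribute: {without_attr:.3f}")
```

In a real study, the pairs would come from model embeddings of annotated face images, and the comparison would typically use full verification error rates rather than raw similarity means.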

Having a moustache, goatee, round face, an obstructed forehead or rosy cheeks or wearing lipstick or glasses can all lead to degraded recognition. Having gray hair improves it, as does having a beard or even just 5 o’clock shadow compared to no beard.

The authors were able to explain some of the reasons behind the results, but not all. “The findings of this work strongly demonstrate the need for further advances in making face recognition systems more robust, explainable, and fair. We hope these findings lead to the development of more robust and unbiased face recognition solutions,” concludes the paper.

AnyVision recently called on companies developing biometrics and AI algorithms to remove demographic bias, in response to the U.S. National Institute of Standards and Technology’s (NIST’s) call for public comment on its proposed method for evaluating user trust in AI systems. OpenAI has admitted demographic biases in its new computer vision model.


