Algorithmic schmutz hurts detection of male face more than female

A research paper looking at how well the three best-known biometric face detection algorithms are likely to work outside the lab found both intuitive and disappointing results.

Three University of Maryland scientists say Google, Microsoft and Amazon software had a harder time detecting — not recognizing — intentionally corrupted faces in large image datasets.

Despite industry boosterism of AI capabilities, and unsurprisingly to many, some kinds of faces proved easier to detect in the biometrics research than others. Masculine-presenting faces, it appears, were more readily hidden from algorithmic detection.

The researchers say they have developed the first detailed benchmark of how robust Amazon’s Rekognition, Microsoft’s Azure and Google’s Cloud Platform are in real-world situations.

Images from four datasets, including Adience, UTKFace, MIAP and CCD, were marred by 15 algorithmically generated corruptions. The imposed defects included pixelation, motion blur, Gaussian noise, fog, frost and JPEG compression.
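The paper's exact corruption pipeline is not reproduced in the article, but two of the named corruptions can be sketched with plain NumPy. The `gaussian_noise` and `pixelate` functions below, and their severity scales, are illustrative assumptions, not the researchers' code:

```python
import numpy as np

def gaussian_noise(img, severity=1):
    """Add zero-mean Gaussian noise; severity (1-5) scales the std dev.
    Illustrative severity scale, not the paper's."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    noisy = img / 255.0 + np.random.normal(0, sigma, img.shape)
    return np.clip(noisy, 0, 1) * 255

def pixelate(img, severity=1):
    """Pixelate by nearest-neighbor downsampling then upsampling
    back to the original size (pure NumPy, no external resizer)."""
    factor = [0.6, 0.5, 0.4, 0.3, 0.25][severity - 1]
    h, w = img.shape[:2]
    small_h, small_w = max(1, int(h * factor)), max(1, int(w * factor))
    # Downsample: pick every (1/factor)-th row and column
    rows = (np.arange(small_h) / factor).astype(int).clip(0, h - 1)
    cols = (np.arange(small_w) / factor).astype(int).clip(0, w - 1)
    small = img[rows][:, cols]
    # Upsample: repeat each small-image pixel back to full size
    rows_up = (np.arange(h) * factor).astype(int).clip(0, small_h - 1)
    cols_up = (np.arange(w) * factor).astype(int).clip(0, small_w - 1)
    return small[rows_up][:, cols_up]
```

In a benchmark like the one described, each corruption would be applied at several severity levels to every image before submitting it to the cloud detection APIs.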

Well-lit images of feminine-presenting subjects with lighter skin types fared best in face detection. Those of older masculine-presenting subjects with darker skin were detected least often.

The researchers did not report on what caused the errors. There is no examination of the vendors’ robustness in the face of adversarial attacks or varied camera capabilities. Nor do they look into how algorithms were trained.

Generally, detection decisions on images of masculine-presenting subjects in the MIAP dataset were 20 percent more likely to be erroneous than those involving feminine-presenting subjects. The UTKFace dataset produced the best results by gender, according to the paper, with statistically insignificant differences between masculine- and feminine-presenting subjects.

Overall, images of the two oldest demographic groups in the Adience dataset were 25 percent more likely to be erroneously detected than those of the two youngest groups.

And consistent with many biometrics test results, the paper found that images of lighter-skinned subjects (based on the controversial Fitzpatrick scale) had a mean relative corruption error rate of 8.5 percent, while the error rate for darker-skinned subjects was 9.7 percent.
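The article's percentage comparisons follow from simple arithmetic on per-group error rates. The two helper functions below are a minimal sketch of that arithmetic, not the paper's metric definitions:

```python
def error_rate(results):
    """Fraction of images on which face detection failed.
    `results` is a list of booleans: True = face detected."""
    return sum(1 for detected in results if not detected) / len(results)

def relative_difference(rate_a, rate_b):
    """How much more likely errors are for group B than group A,
    expressed as a fraction of group A's rate."""
    return (rate_b - rate_a) / rate_a
```

Plugging in the skin-type figures the article reports (8.5 and 9.7 percent) gives a relative gap of roughly 14 percent, smaller than the 20 and 25 percent gaps reported for gender and age.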
