Algorithmic schmutz hurts detection of male faces more than female


A research paper examining how well three of the best-known biometric face detection algorithms are likely to work outside the lab found both intuitive and disappointing results.

Three University of Maryland scientists say Google, Microsoft and Amazon software had a harder time detecting — not recognizing — intentionally corrupted faces in large image datasets.

Despite industry boosterism of AI capabilities, and perhaps unsurprisingly to many, some kinds of faces were more easily detected in the biometrics research than others. Masculine-presenting faces, in particular, were more readily hidden from algorithmic detection.

The researchers say they have developed the first detailed benchmark of how robust Amazon’s Rekognition, Microsoft’s Azure and Google’s Cloud Platform face detection services are in real-world situations.

Images from four datasets, including Adience, UTKFace, MIAP and CCD, were marred by 15 algorithmically generated corruptions. The imposed defects included pixelation, motion blur, Gaussian noise, fog, frost and JPEG compression.
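The paper itself does not publish its corruption code here, but corruptions of this kind are straightforward to sketch. Below is a minimal, illustrative Python example (not the researchers' implementation) of two of the listed defects, Gaussian noise and pixelation, applied to an image represented as a NumPy array with values in [0, 1]; the function names and severity parameters are assumptions for illustration.

```python
import numpy as np

def gaussian_noise(img: np.ndarray, severity: float = 0.1) -> np.ndarray:
    """Add zero-mean Gaussian noise to a float image in [0, 1]."""
    noisy = img + np.random.normal(0.0, severity, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def pixelate(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Block-average the image in factor x factor tiles, then upsample
    by repetition, producing the familiar pixelation effect."""
    h, w = img.shape[:2]
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    small = (img[:h2, :w2]
             .reshape(h2 // factor, factor, w2 // factor, factor, -1)
             .mean(axis=(1, 3)))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# Corrupt a random 64x64 RGB "image" at the default severity.
img = np.random.rand(64, 64, 3)
noisy, blocky = gaussian_noise(img), pixelate(img)
```

In a benchmark like the one described, each corruption would be applied at several severity levels to every dataset image before the image is submitted to the cloud detection APIs.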

Well-lit images of feminine-presenting subjects with lighter skin types fared best in face detection. Those of older masculine-presenting subjects with darker skin were detected least often.

The researchers did not report on what caused the errors, nor did they examine the vendors’ robustness against adversarial attacks or varied camera capabilities, or how the algorithms were trained.

Generally, decisions on images of masculine-presenting subjects in the MIAP dataset were 20 percent more likely to be erroneous than those involving feminine-presenting subjects. The UTKFace dataset produced the best results involving gender, according to the paper, with statistically insignificant differences between masculine and feminine.

Overall, images of the oldest two demographic groups were 25 percent more likely to be erroneously detected than the youngest two groups in the Adience dataset.

And consistent with many biometrics test results, the paper found that images of lighter-skinned subjects (based on the controversial Fitzpatrick scale) had a mean relative corruption error rate of 8.5 percent, compared with 9.7 percent for darker-skinned subjects.
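The article does not define how "mean relative corruption error" is computed. One plausible reading, sketched below as an assumption rather than the paper's actual formula, is the detection error on corrupted images measured relative to the clean-image baseline, averaged over the corruption types:

```python
from statistics import mean

def mean_relative_corruption_error(clean_error: float,
                                   corruption_errors: list[float]) -> float:
    """Average, over corruption types, of the error increase on corrupted
    images relative to the clean-image baseline (an illustrative metric)."""
    return mean(err - clean_error for err in corruption_errors)

# Hypothetical numbers for illustration only: a 2% clean error and
# per-corruption errors of 9%, 11% and 12%.
rate = mean_relative_corruption_error(0.02, [0.09, 0.11, 0.12])
```

Under a metric like this, the reported 8.5 versus 9.7 percent figures would mean corruptions degrade detection of darker-skinned subjects slightly more, on average, than detection of lighter-skinned subjects.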
