
Facial analysis still biased, press still confusing it with facial recognition

Government Technology claims that an assistant professor at the University of Maryland performed an audit of leading facial recognition services and found significant racial bias, with match rates . . . not reported.

The article is the latest example of media conflating facial analysis with facial recognition. Indeed, the article notes that it was “emotion-recognition technology within three facial recognition services” that was tested.

The researcher, Lauren Rhue, is not quoted mistakenly using the term “recognition,” but rather warning that we should ask “do we need to analyze faces in this way?”

The author of the article, however, uses the term “facial recognition” six times, and refers to real-world face biometric matching applications, but makes no attempt to differentiate between software that answers the question “are these images of the same person?” and software that answers “what is this person feeling?”

Some AI researchers consider emotion recognition closer to phrenology than to biometrics.

The article was originally printed in the Baltimore Sun. This means readers of both that publication and GovTech are at risk of being misled by the same kind of imprecise language that biometrics industry insiders, and scientists in general, have long warned against.

A similar study was famously conducted by Joy Buolamwini in 2018, but Buolamwini was clear that facial analysis software was used, and drew more general observations about AI and biometrics from there. Those observations have also been borne out as generally accurate.

Millions of dollars and the time of hundreds of researchers have since been dedicated to addressing the problem of demographic disparities, or bias, in facial recognition, with significant progress. Generalizing from “emotion recognition” to face biometrics has become misleading.

Arguably worse are the false parallels drawn between the study’s results and the use of biometrics at the Port of Baltimore, which does not use emotion recognition, whether from the providers studied or any others, to match faces.

The question Rhue asks above is reasonable, and the answer is likely ‘no.’ Work remains to be done both to clarify the extent of bias in machine learning and related fields and to eliminate it.

Work also remains on educating the public about what facial recognition means, and what it does not.
