Computer vision modelers take too much for granted. Data sets hold bias surprises

A pair of U.S. researchers say unsupervised computer vision models used in biometrics and other applications can learn nasty social biases from the way that people are portrayed on the internet, the source of large numbers of training images.
The researchers developed what they describe as the first systematic method to detect and quantify social bias, including skin-tone bias, in unsupervised image models, and they report replicating eight of 15 documented human biases in their experiments.
The research has been posted to a preprint server by Ryan Steed of Carnegie Mellon University and Aylin Caliskan of George Washington University.
Statistically significant gender, racial, body-size and intersectional biases were found in a pair of state-of-the-art image models, iGPT and SimCLRv2, both pre-trained on ImageNet.
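The article does not spell out the test mechanics, but quantifying bias in embeddings along these lines is typically done with an association test in the style of the word-embedding association test (WEAT), applied to image embeddings from the frozen pre-trained model. Below is a minimal sketch of such an effect-size computation; the `effect_size` function, the group and attribute sets, and any iGPT/SimCLRv2 encoder wiring are illustrative assumptions, not the authors' published code.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B):
    """Mean similarity of embedding w to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """WEAT-style effect size d: how differently target sets X and Y associate
    with attribute sets A and B (e.g. 'pleasant' vs. 'unpleasant' images)."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Hypothetical usage: X, Y hold embeddings of images of two social groups and
# A, B hold embeddings of attribute images, all produced by a frozen
# pre-trained encoder such as iGPT or SimCLRv2.
# d = effect_size(X, Y, A, B)  # |d| near 0 suggests little association bias
```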
As noted by VentureBeat, ImageNet is a popular image data set scraped from web pages. It also is “problematic,” according to the technology news outlet.
Business magazine Fast Company looked at ImageNet’s roughly 3,000 categories for people and found “bad person,” “wimp,” “drug addict” and the like.
The authors conclude that advances in natural language processing, where unsupervised pre-training on lightly curated web data has become the norm, have lulled developers into complacency when training vision models for facial recognition and other tasks. Garbage data exists in image data sets, yet systems neither filter it out nor alert data scientists and developers to its presence.
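The article does not specify how such filtering or alerting should work. As a purely illustrative sketch, assuming a hypothetical blocklist built from label strings like the ones Fast Company flagged, a data pipeline could at least warn when pre-training samples come from known-problematic categories:

```python
# Illustrative only: the blocklist reuses label strings cited in the article;
# it is a hypothetical example, not a real ImageNet curation tool.
HARMFUL_LABELS = {"bad person", "wimp", "drug addict"}

def flag_problematic(samples):
    """Yield (sample, reason) for samples whose label matches the blocklist.

    Assumes `samples` is an iterable of dicts such as
    {"path": "img_001.jpg", "label": "drug addict"}.
    """
    for sample in samples:
        label = sample.get("label", "").strip().lower()
        if label in HARMFUL_LABELS:
            yield sample, f"blocked label: {label!r}"

# Alert data scientists instead of silently training on flagged samples.
dataset = [
    {"path": "img_001.jpg", "label": "violinist"},
    {"path": "img_002.jpg", "label": "drug addict"},
]
for sample, reason in flag_problematic(dataset):
    print(f"WARNING {sample['path']}: {reason}")
```

Keyword checks of this kind only catch the most obvious label problems; they do nothing about biased depictions inside innocuous categories, which is the harder issue the paper raises.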
The paper warns the community that “pre-trained models may embed all types of harmful human biases from the way people are portrayed in training data.” Choices made in model design “determine whether and how those biases are propagated into harms downstream.”