Computer vision modelers take too much for granted. Data sets hold bias surprises
A pair of U.S. researchers say unsupervised computer vision models used in biometrics and other applications can learn nasty social biases from the way that people are portrayed on the internet, the source of large numbers of training images.

The scientists say they know this because they created what they say is the first systematic way to detect and quantify social bias — including skin tone — in unsupervised image models. In fact, they claim to have replicated eight of 15 human biases in their experiments.
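Their approach adapts association tests originally developed for word embeddings to image embeddings. As a rough illustration (not the authors' exact implementation), such a test measures whether one group of embeddings sits closer to one attribute set than another, using a Cohen's-d-style effect size. The function names below are hypothetical, and embeddings are assumed to be plain vectors:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # How much closer embedding w is to attribute set A than to set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Differential association of target groups X and Y with attributes A and B,
    # normalized by the pooled standard deviation (a Cohen's-d-style statistic)
    x_assoc = np.array([association(x, A, B) for x in X])
    y_assoc = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([x_assoc, y_assoc])
    return (x_assoc.mean() - y_assoc.mean()) / pooled.std(ddof=1)
```

A large positive effect size would indicate that group X is more strongly associated with attribute set A (and Y with B); values near zero indicate no measurable bias along that axis.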

The research has been posted to a preprint server by Ryan Steed of Carnegie Mellon University and Aylin Caliskan of George Washington University.

Statistically significant gender, racial, body size and intersectional biases were found in two state-of-the-art image models, iGPT and SimCLRv2, both pre-trained on ImageNet.

As noted by VentureBeat, ImageNet is a popular image data set scraped from web pages. It also is “problematic,” according to the tech news publisher.

Business magazine Fast Company looked at ImageNet’s 3,000 categories for people and found “bad person,” “wimp,” “drug addict” and the like.

The authors conclude that advances in natural language processing have lulled developers into complacency about training vision models for facial recognition and other tasks. Garbage data exists in image data sets, and systems neither filter it out nor alert data scientists and developers to its presence.

The paper warns the community that “pre-trained models may embed all types of harmful human biases from the way people are portrayed in training data.” Choices made in model design “determine whether and how those biases are propagated into harms downstream.”
