The internet’s small-mindedness makes AI brutish as well
If bias cannot be wrung out of the process of training biometric and other AI algorithms, perhaps unsupervised algorithms can learn their tasks free of harmful biases, right?
Wrong. Researchers have published a non-peer-reviewed paper describing how they "observe[d] human-like biases in the majority of our tests."
Ongoing academic debates, industry introspection and anti-bias programs could yet succeed in keeping people's negative stereotypes from being embedded in algorithms, bringing AI closer to the ideal of just decisions.
In the meantime, some think, unsupervised algorithms can train themselves by deciding how data in large sets is related.
That might not be possible, according to the paper, written by a pair of researchers from Carnegie Mellon University and George Washington University. The scientists found that two general-purpose machine learning models exhibited clear biases after unsupervised training on the ubiquitous computer-vision dataset ImageNet.
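Unsupervised, or self-supervised, pre-training of this kind skips human labels entirely: a model such as SimCLRv2 learns by pulling two augmented views of the same photograph together in its embedding space and pushing other photographs apart. The sketch below is a minimal illustration of that idea, assuming PyTorch, a ResNet backbone and a standard contrastive loss; it is not the authors' or the companies' training code, and the augmentation and model choices are placeholders.

    # A minimal sketch of what "training unsupervised" on ImageNet looks like,
    # assuming PyTorch and torchvision. No human-written labels are involved.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms

    # Two random augmentations of the same photo form a "positive pair".
    augment = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
        transforms.ToTensor(),
    ])

    encoder = models.resnet50(weights=None)       # untrained backbone
    encoder.fc = torch.nn.Linear(2048, 128)       # simplified projection head

    def nt_xent_loss(z1, z2, temperature=0.5):
        """SimCLR-style contrastive loss over a batch of paired embeddings."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
        sim = z @ z.T / temperature                          # pairwise similarities
        sim.masked_fill_(torch.eye(len(z), dtype=torch.bool), float("-inf"))
        n = z1.size(0)
        # The "correct answer" for view i is the other view of the same image.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

Because the objective only asks the model to tell photographs apart, whatever regularities exist in the photographs themselves are absorbed into the learned representation.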
The problem is that the internet, ImageNet’s source of photographs, is harmfully biased.
After pre-training, OpenAI’s iGPT and Alphabet’s SimCLRv2 both exhibited stereotyped assumptions.
The researchers had the models generate bodies to complete cropped headshots, and the results were disheartening.
Images of women’s faces were “completed” with automatically fashioned torsos, often drawn with revealing clothing. White male faces were often completed with suits or blue-collar garb. One Black man’s headshot was completed with a body holding a weapon.
The iGPT model, according to the researchers, associated a thin physique with “pleasantness” and overweight bodies with “unpleasantness.”
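Findings like that come from embedding association tests, the image analogue of the word-embedding association tests used in earlier bias research: the model's embeddings of two target sets of images (for example, thin and overweight bodies) are compared against two attribute sets (pleasant and unpleasant images), and an effect size measures which way the associations lean. The sketch below, assuming NumPy and illustrative variable names rather than anything from the paper's own code, shows that WEAT-style effect-size calculation.

    # A rough sketch of the embedding association test family (WEAT and its
    # image analogue) behind findings like this one, assuming NumPy. The arrays
    # stand in for embeddings a pre-trained model would produce; the variable
    # names are illustrative, not taken from the paper's code.
    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def association(w, A, B):
        # How much closer embedding w sits to attribute set A than to set B.
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

    def effect_size(X, Y, A, B):
        # WEAT-style effect size: positive values mean target set X (vs. Y)
        # associates more strongly with attribute set A (vs. B).
        x_assoc = [association(x, A, B) for x in X]
        y_assoc = [association(y, A, B) for y in Y]
        return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

    # Toy usage with random vectors in place of real model embeddings:
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 128))   # e.g. embeddings of thin-body images
    Y = rng.normal(size=(8, 128))   # e.g. embeddings of overweight-body images
    A = rng.normal(size=(8, 128))   # e.g. "pleasant" attribute images
    B = rng.normal(size=(8, 128))   # e.g. "unpleasant" attribute images
    print(effect_size(X, Y, A, B))  # near zero here; a biased model drifts from zero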