Tool pushes AI community’s nose in algorithms’ mess

An AI researcher has created a tool that confronts viewers with the bias built into artificial intelligence algorithms.
Sasha Luccioni created her tool for Hugging Face, a self-described AI community “on a mission to democratize good machine learning.” She chose Stability AI’s Stable Diffusion text-to-image model to check for bias.
Visitors are invited to type words and short, descriptive phrases for Stable Diffusion to illustrate – prompts that, if the algorithm were unbiased, would produce images reflecting reality.
Instead, it builds images, four at a time, that better represent the internet’s constructed reality – sexist, racist, ageist and classist results. (Stable Diffusion also has an unhealthy and unrealistic obsession with fingers sprouting in great numbers from palms.)
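For readers curious how such probing works under the hood, below is a minimal sketch using Hugging Face’s open source diffusers library. It is not Luccioni’s implementation; the model checkpoint and the occupation prompts are assumptions chosen to mirror the article’s examples, and the four-images-per-prompt setting matches the behavior described above.

```python
# A minimal sketch of the kind of probing described above, using
# Hugging Face's open source diffusers library. This is not
# Luccioni's actual tool; the checkpoint and prompts are
# illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint
# (assumption: any v1.x checkpoint shows similar behavior).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; use "cpu" (and drop float16) otherwise

# Short, descriptive occupation prompts like the ones the article cites.
prompts = ["surgeon", "nurse"]

for prompt in prompts:
    # Generate four images per prompt, matching the tool's behavior.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        image.save(f"{prompt}_{i}.png")
```

Judging demographic skew is then a matter of eyeballing, or systematically annotating, the saved images across many runs.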
The output is not like the seedier corners of the web. The algorithm does not simply churn out crude and hateful images; it is just that some demographics are underrepresented.
And results are uneven.
“Surgeon” results in many images of women, though almost all are white in appearance. “Nurse” brings up mostly female depictions; one was of a male doctor talking to a female nurse who appeared to be crying.
For better or worse, drug abusers are white or likely white in Stable Diffusion’s imagination.
Luccioni probably has seen a thing or two in this regard. She is a postdoctoral researcher with the Université de Montréal and has been an AI research scientist for Nuance Communications and Morgan Stanley in a career dating back to 2017.
In an interview with tech news and culture publication Gizmodo, Luccioni said she had considered putting OpenAI’s DALL-E 2 to the test, but chose Stable Diffusion because it is a more open and “less regulated platform.”
The article notes that OpenAI has spoken openly about bias in DALL-E 2.
Luccioni has created other bias tools, including one that scores submitted algorithms. It would benefit everyone if that tool enjoyed the same cultural splash that the AI algorithms themselves are making.
If the community focused on using such tools to make these algorithms less biased, developers would face less guesswork and could write code more efficiently, AI fence-sitters would feel more comfortable getting involved, and there would be more accountability within the community. The general public could feel better about AI, and lawmakers would likely feel braver about regulating its use rather than banning it.