Persistent AI bias examined with facial recognition water gun and other initiatives
An artist has developed a water gun which identifies targets with biometric facial recognition as part of the European Commission’s SHERPA project on ethics in machine learning, to show how algorithmic bias can lead to discrimination and unfair treatment, Horizon reports.
The demonstration comes just as Vice Motherboard reports that the AI Portrait Ars app developed by the MIT-IBM Watson AI Lab ‘whitewashes’ images, in another apparent case of algorithmic racial bias. Mashable reporter Morgan Sung found the app significantly changed both her skin tone and facial features to make her appear more Caucasian in the portrait.
The app uses a generative adversarial network (GAN), in which generator and discriminator algorithms work against each other to create portraits. IBM Research told Motherboard in a statement that the tool reflects its training dataset, which was mostly drawn from “a collection of 15,000 portraits, predominantly from the Western European Renaissance period.” The company also says the strong alteration of colors and shapes reflects the Renaissance portrait style.
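For readers unfamiliar with the architecture, a minimal GAN training loop is sketched below in PyTorch. The network sizes, image shape, and training step are illustrative placeholders, not the lab’s actual model; the point is that the generator only ever learns to produce whatever its training data presents as “real.”

```python
import torch
import torch.nn as nn

# Minimal, hypothetical GAN: a generator maps random noise to images,
# a discriminator scores images as real or fake. Real portrait GANs are
# far larger; this only illustrates the two-network adversarial setup.
LATENT, IMG = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images is a (batch, IMG) tensor."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: learn to separate real portraits from generated ones.
    fake_images = generator(torch.randn(batch, LATENT))
    d_loss = (loss(discriminator(real_images), real_labels)
              + loss(discriminator(fake_images.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to fool the discriminator. Whatever biases the
    # training set carries (e.g. mostly Western European Renaissance
    # portraits) define what counts as "real" here.
    g_loss = loss(discriminator(fake_images), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```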
The app joins a growing list of biometric systems and algorithms that perform less accurately for non-white people.
Professor Bernd Stahl, SHERPA project leader and director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK, says AI researchers should by now be aware of the ethical implications of their algorithms.
“Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,” says Stahl. “The idea is to get people to think about what this sort of technology can do.”
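The source does not describe the artist’s actual implementation, but as an illustration of how such selective targeting could work in principle, the sketch below gates a trigger on an identity match using the open-source face_recognition Python library. The reference photo path and fire_water_gun() function are hypothetical placeholders; filtering by gender or age, as Stahl describes, would additionally require a separate attribute classifier not shown here.

```python
import face_recognition

# Hypothetical sketch: fire only when one specific enrolled face is seen.
# "target.jpg" and fire_water_gun() are placeholders, not the artist's code.
target_image = face_recognition.load_image_file("target.jpg")
target_encoding = face_recognition.face_encodings(target_image)[0]

def should_fire(frame):
    """Return True if the enrolled target appears in a camera frame (RGB array)."""
    for encoding in face_recognition.face_encodings(frame):
        # compare_faces returns one boolean per known encoding; tolerance
        # trades false matches against missed matches.
        if face_recognition.compare_faces([target_encoding], encoding,
                                          tolerance=0.6)[0]:
            return True
    return False

def fire_water_gun():
    print("squirt!")  # stand-in for actuating the hardware

frame = face_recognition.load_image_file("camera_frame.jpg")
if should_fire(frame):
    fire_water_gun()
```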
The SHERPA team has conducted 10 empirical case studies of AI use across a range of sectors, and Stahl expresses concern about the impact of algorithms on people’s right to work and to free elections. Predictive policing is another field in which algorithmic bias could be particularly harmful, and Stahl notes that a lack of transparency is also a significant part of the problem.
The EC has also launched the SIENNA project to develop recommendations and codes of conduct for emerging technologies, including AI. SIENNA is conducting workshops, expert consultations, and public opinion surveys as it prepares to produce its recommendations.
“If we don’t get the ethics right, then people are going to refuse to use it and that will annihilate any technical progress,” Stahl says.