Persistent AI bias examined with facial recognition water gun and other initiatives

An artist has developed a water gun that identifies targets with biometric facial recognition as part of the European Commission’s SHERPA project on ethics in machine learning, demonstrating how algorithmic bias can lead to discrimination and unfair treatment, Horizon reports.

The demonstration comes just as Vice Motherboard reports that the AI Portrait Ars app developed by the MIT-IBM Watson AI Lab ‘whitewashes’ images, in another case of apparent algorithmic racial bias. Mashable reporter Morgan Sung found the app significantly changed both her skin tone and facial features, making her appear more Caucasian in the portrait.

The app uses a generative adversarial network (GAN) to create portraits, pairing a generator algorithm that produces images with a discriminator algorithm that judges them. IBM Research told Motherboard in a statement that the tool reflects its dataset, which was mostly drawn from “a collection of 15,000 portraits, predominantly from the Western European Renaissance period.” The company also says the strong alteration of colors and shapes is reflective of the Renaissance portrait style.
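To make that generator/discriminator dynamic concrete, here is a minimal, illustrative GAN training loop in PyTorch. The tiny fully connected networks, the latent and image sizes, and the random stand-in “portraits” are all assumptions made for this sketch; this is not IBM’s model.

```python
# Minimal GAN sketch (PyTorch). Architectures, sizes, and the random
# stand-in "portraits" are illustrative assumptions, not IBM's model.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # assumed sizes for the sketch

# Generator: maps random noise to a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Random tensors standing in for the training portraits.
real_images = torch.rand(32, img_dim) * 2 - 1

for step in range(100):
    # Train discriminator: push real images toward 1, generated toward 0.
    z = torch.randn(32, latent_dim)
    fake = G(z).detach()
    d_loss = (bce(D(real_images), torch.ones(32, 1)) +
              bce(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train generator: try to fool the discriminator into outputting 1.
    z = torch.randn(32, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the generator is rewarded only for producing images the discriminator cannot distinguish from the training set, a corpus dominated by Western European Renaissance portraits will pull every output, and every face, toward that distribution, which is the dataset effect IBM describes.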

The app is the latest addition to a growing list of biometric systems and algorithms that perform less well for non-white people.

Professor Bernd Stahl, SHERPA project leader and director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK, says AI researchers should by now be aware of the ethical implications of their algorithms.

“Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,” says Stahl. “The idea is to get people to think about what this sort of technology can do.”
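The gating logic such a device needs is simple, which is part of what makes the demonstration effective. The sketch below is hypothetical and is not the artist’s code: the Haar-cascade face detector is one common off-the-shelf choice, and looks_like_target() and squirt() are placeholder names for a trained attribute classifier and the water-gun actuator.

```python
# Hypothetical sketch of the gating loop such an artwork demonstrates:
# a face detector finds faces, a classifier decides who gets squirted.
# looks_like_target() and squirt() are placeholders, not the artist's code.
import cv2  # OpenCV, used here only for off-the-shelf face detection

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def looks_like_target(face_img) -> bool:
    # Placeholder for a trained attribute classifier (gender, age, or a
    # single enrolled identity). Its learned errors ARE the bias on display.
    return False  # always False here so the sketch runs without a model

def squirt():
    print("squirt!")  # stand-in for driving the water-gun hardware

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break  # camera unavailable or stream ended
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        if looks_like_target(frame[y:y + h, x:x + w]):
            squirt()  # the machine acts on whatever the classifier decides
cap.release()
```

Whatever demographic skew the classifier has learned is acted on immediately and physically, which is precisely the point Stahl says the artwork is meant to make.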

The SHERPA team has conducted 10 empirical case studies on AI use in a range of sectors, and Stahl expresses concern about the impact of algorithms on people’s right to work and to free elections. Predictive policing is another field in which algorithmic bias could be particularly harmful. Stahl notes that transparency is also a significant part of the problem.

The EC has also launched the SIENNA project to develop recommendations and codes of conduct for emerging technologies, including AI. SIENNA is conducting workshops, expert consultations, and public opinion surveys, and is preparing to produce its recommendations.

“If we don’t get the ethics right, then people are going to refuse to use it and that will annihilate any technical progress,” Stahl says.
