Persistent AI bias examined with facial recognition water gun and other initiatives

An artist working with the European Commission’s SHERPA project on ethics in machine learning has developed a water gun that identifies targets with biometric facial recognition, to show how algorithmic bias can lead to discrimination and unfair treatment, Horizon reports.

The demonstration comes just as Vice’s Motherboard reports that the AI Portrait Ars app, developed by the MIT-IBM Watson AI Lab, ‘whitewashes’ images in another apparent case of algorithmic racial bias. Mashable reporter Morgan Sung found the app significantly changed both her skin tone and facial features to make her appear more Caucasian in the portrait.

The app uses a generative adversarial network (GAN), in which a generator network creates portraits and a discriminator network judges them. IBM Research told Motherboard in a statement that the tool reflects its training dataset, which was mostly drawn from “a collection of 15,000 portraits, predominantly from the Western European Renaissance period.” The company also says the strong alteration of colors and shapes reflects the Renaissance portrait style.
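For readers unfamiliar with the generator/discriminator pairing a GAN relies on, the following is a minimal, illustrative sketch in Python with NumPy. It is not the MIT-IBM lab's actual code; the toy dimensions, linear "networks," and loss form are all assumptions chosen to keep the structure visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Maps latent noise z to a synthetic sample (toy: a single linear layer).
    return z @ w

def discriminator(x, v):
    # Scores how "real" a sample looks: sigmoid of a linear score in (0, 1).
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Toy dimensions: 4-dim noise -> 8-dim sample.
w = rng.normal(size=(4, 8))   # generator weights
v = rng.normal(size=(8,))     # discriminator weights

z = rng.normal(size=(16, 4))      # batch of latent noise vectors
fake = generator(z, w)            # generator forges a batch of samples
scores = discriminator(fake, v)   # discriminator rates each forgery

# Non-saturating generator loss: low when the discriminator is fooled.
gen_loss = -np.log(scores + 1e-9).mean()
print(fake.shape, scores.shape)
```

In a real GAN both sets of weights are deep networks trained in alternation, so the generator's output distribution drifts toward whatever the training portraits look like, which is why a Renaissance-heavy dataset skews every output toward that style.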

The app adds another instance to a growing list of biometric systems and algorithms that work less well for non-white people.

Professor Bernd Stahl, SHERPA project leader and director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK, says AI researchers should by now be aware of the ethical implications of their algorithms.

“Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,” says Stahl. “The idea is to get people to think about what this sort of technology can do.”

The SHERPA team has conducted 10 empirical case studies on AI use in a range of sectors, and Stahl expresses concern about the impact of algorithms on people’s right to work and to free elections. Predictive policing is another field in which algorithmic bias could be particularly harmful. Stahl notes that transparency is also a significant part of the problem.

The EC has also launched the SIENNA project to develop recommendations and codes of conduct for emerging technologies, including AI. SIENNA is conducting workshops, expert consultations, and public opinion surveys, and is preparing to produce its recommendations.

“If we don’t get the ethics right, then people are going to refuse to use it and that will annihilate any technical progress,” Stahl says.
