Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues
The artificial intelligence community needs to begin developing the vocabulary to define and clearly explain the harms the technology can cause, in order to rein in abuses with facial biometrics, AI Now Institute Technology Fellow Deb Raji argues in a TWIML AI podcast.
In the podcast episode, “How External Auditing is Changing the Facial Recognition Landscape with Deb Raji,” host Sam Charrington asks about the genesis of the audits Raji and colleagues have performed of biometric facial recognition systems, industry response, and the ethical way forward.
Raji describes her path from academia and an internship with Clarifai to taking up the cause of algorithmic bias, and connecting with Joy Buolamwini after watching her TED Talk. The work Raji did with others in the community gained prominence with Gender Shades, and concepts that emerged from that and similar projects have been built into engineering practices at Google.
Raji characterizes facial recognition as “very immature technology,” whose failures were exposed by the Gender Shades study.
“It really sort of stemmed from this desire to…identify the problem in a consistent way and communicate it in a consistent way,” Raji says of the early work delineating the problem of demographic differentials in facial recognition.
Raji won an AI Innovation Award, along with Buolamwini and Timnit Gebru, for their work in 2019.
The problem was hardly understood at all when Raji first began bringing it up, and even now seems to be fully comprehended by few in the community, as Raji says is demonstrated by a recent Twitter argument between Yann LeCun and Gebru. Raji comments that the connection between research efforts like LeCun’s and products should be very clear to him. Raji also pans his downplaying of what she calls “procedural negligence”: the failure to include people of color in testing.
Representation does not necessarily mean that the training dataset demographics mirror the society the model is being deployed in. Raji notes that if 10 percent of the people in a certain area have dark skin, then models used there need to be trained with enough images of people with dark skin to ensure that the model works for that 10 percent, which may be a much higher ratio.
Raji also talks during the podcast about how the results of the follow-up testing show the need for targeted pressure to force companies to address the gaps in their demographic performance. The limits of auditing are also explored in the conversation.
The need for information specific to individual implementations is discussed in the context of law enforcement uses of facial recognition, and Raji suggests the technology should be taken off the market in the absence of that information.
Raji says that as some facial recognition systems have reduced or practically eliminated demographic disparities and other accuracy issues, the problem of the technology’s weaponization has become more pressing. She notes that people guard their fingerprint data much more carefully than their facial images. In addition to misuse by law enforcement, sometimes out of ignorance about the technology and sometimes deliberate, Raji points to the weaponization of the technology in deployments like the one at the Atlantic Plaza Towers in Brooklyn.
The bias issue exposes the complexity of the technology, and punctures the myth that facial recognition is like magic, Raji suggests. While the necessary conversations are held, the technology should not be used, according to Raji. To make it safe, she suggests that technical standards like those supplied by NIST need to be supplemented with others that include ethical considerations, like those produced or discussed by ISO, IEEE, and the WEF.
Though Raji presents the problems she is concerned with as systemic, she acknowledges that some applications of facial recognition algorithms are benign.
“No-one’s threatening your Snapchat filter,” Raji states.