Masks mistaken for duct tape: researchers experiment to reduce bias in biometrics
Arguing that biometric facial recognition is often biased, some researchers believe human intervention could solve the problem. But humans can be biased too, says Georgia Tech College of Computing Ph.D. alumna Samira Samadi, who argues that human-subject experiments should be run before concluding that human intervention is a silver bullet for software limitations.
“Humans are biased themselves, so how can you resolve an issue of bias with a human?” Samadi told the University. “It might even make it worse.”
Curious whether a human evaluator would make the process fairer or more biased, Samadi recruited participants for a human-subject study, teaching them how facial recognition systems work and how to make decisions about system accuracy.
“We really tried to imitate a real-world scenario, but that actually made it more complicated for the users,” Samadi said.
The experiment also confirmed how difficult it is to find an appropriate dataset of ethically sourced images that would not itself introduce bias into the study. The research was published in a paper titled “A Human in the Loop is Not Enough: The Need for Human-Subject Experiments in Facial Recognition.”
A NIST study that analyzed 189 software algorithms developed by 99 companies found that Asian and African American faces were falsely matched at rates in many cases 10 to 100 times higher than Caucasian faces, though some algorithms were found to have “undetectable” demographic differences.
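To make the “10 to 100 times higher” figure concrete, the sketch below computes a false match rate per demographic group and the disparity ratio between groups. This is not NIST’s actual methodology, and all counts are hypothetical, chosen only to land inside the reported range.

```python
# Minimal sketch (not NIST's methodology): comparing false match rates
# across demographic groups. All tallies below are hypothetical.

def false_match_rate(false_matches: int, comparisons: int) -> float:
    """Fraction of impostor comparisons incorrectly accepted as matches."""
    return false_matches / comparisons

# Hypothetical per-group tallies, for illustration only.
group_rates = {
    "group_a": false_match_rate(5, 100_000),
    "group_b": false_match_rate(250, 100_000),
}

# Disparity ratio: how many times higher one group's rate is than another's.
disparity = group_rates["group_b"] / group_rates["group_a"]
print(f"disparity ratio: {disparity:.0f}x")  # ~50x, inside NIST's 10x-100x range
```

A single ratio like this hides detail (it ignores confidence thresholds and false non-match rates), but it is the shape of comparison behind the headline numbers.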
Durham University’s Computer Science Department has also been working on pilots to reduce bias in facial recognition technology, reports Palatinate, the university’s independent student newspaper. Last month, PhD students Seyma Yucer-Tektas and Samet Akçay, alongside staff members Dr. Noura Al Moubayed and Professor Toby Breckon, presented research that lowered racial bias by one percent and improved ethnicity accuracy. To reach these results, the team used a synthesized dataset with varied facial and racial features and a greater focus on identifying features.
AI tools developed by Google, IBM and Microsoft turned out to be less accurate than expected, writes Ilinca Barsan, Director of Data Science at Wunderman Thompson Data. Google Cloud Vision API, for instance, did not reliably recognize the “PPE” or “mask” label and misclassified images; among the more surprising tags it detected were mouth, duct tape, headgear and costume. “Mask” was identified with 74 percent confidence. IBM Watson Visual Recognition was fairly vague, Barsan writes, while Microsoft Azure Cognitive Services Computer Vision “displayed a more innocuous gender bias.”
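Label-detection APIs of this kind typically return a list of tags with confidence scores, and a quick way to spot a misclassification is to sort those tags by confidence. The sketch below does that with mock data: the label names mirror the tags Barsan reported, but every confidence value except “mask” (74 percent, from the article) is made up for illustration.

```python
# Hedged sketch of sanity-checking a vision API's label output.
# Label names mirror Barsan's reported tags; all confidences except
# "mask" (0.74, from the article) are hypothetical.

def top_labels(labels, threshold=0.70):
    """Return (label, confidence) pairs at or above threshold, best first."""
    return sorted(
        ((name, score) for name, score in labels.items() if score >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

detected = {
    "mouth": 0.92,      # hypothetical confidence
    "duct tape": 0.80,  # hypothetical confidence
    "headgear": 0.77,   # hypothetical confidence
    "costume": 0.72,    # hypothetical confidence
    "mask": 0.74,       # confidence reported in Barsan's test
}

for name, score in top_labels(detected):
    print(f"{name}: {score:.0%}")
```

With these numbers, “duct tape” outranks “mask” for a photo of a masked face, which is exactly the kind of result that flags a misclassification worth auditing.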
“It’s fascinating albeit not surprising to realize that for each of the three services tested, we stumbled upon gender bias when trying to solve what seemed like a fairly simple machine learning problem,” concluded Barsan. “That this happened across all three competitors, despite vastly different tech stacks and missions isn’t surprising precisely because the issue extends beyond just one company or one computer vision model.”
New software to cut down on demographic differences in face biometric performance has also reached the market. The ethnicity-neutral facial recognition API developed by AIH Technology is officially available in the Microsoft Azure Marketplace. In March, the Canadian company joined the Microsoft Partner Network (MPN) and announced plans for the global launch of its Facial-Recognition-as-a-Service (FRaaS).
“AIH Technology’s mission is to bring positive changes to the real-world problems facing AI applications, including addressing racial bias in facial recognition,” said Ben Su, COO of AIH Technology, in a prepared statement. “With the global reach of the Azure Marketplace and its supportive ecosystem, AIH Technology is well-positioned in making meaningful impacts with our racially inclusive facial recognition algorithm.”