BBC addresses facial recognition bias issues in new Click episode
A recent episode of BBC Click examined the problems caused by facial recognition biases and how they affect people on a daily basis.
Presented by Spencer Kelly and Lara Lewington, the program opened by acknowledging the racial inequalities still present in today’s society, and how discrimination can (unwittingly) be embedded in the code of next-generation technologies, including biometrics.
For example, the UK Home Office’s website has in the past had difficulties processing photos of people from ethnic minorities, prompting users to upload a new photo despite the original being, in fact, correctly formatted.
The BBC has been exploring the issue further, with Craig Langran talking to a former UberEats driver who lost his job after the company introduced facial recognition to verify drivers’ profiles against their IDs.
The driver, who belonged to an ethnic minority, had his account blocked after UberEats claimed the file he had submitted was a photo of a photo, or a biometric spoof, though that was reportedly not the case.
The episode reflected a wider trend spotted by the BBC of UberEats drivers from ethnic minorities losing their jobs because of facial recognition biases.
The second part of the program then analyzed the source of these biases, which, according to Deborah Raji, a machine learning engineer at the Mozilla Foundation, lies in the datasets selected to train these systems.
“Even our datasets predicting a wedding dress will show the western, Christian wedding as the prediction for that because they are very heavily influenced by western media,” Raji explained on the program.
“Western media is not very diverse and does not feature people of color in a lot of TV shows,” she added.
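As a rough illustration of the kind of dataset audit Raji describes, the short Python sketch below counts how evenly demographic groups are represented in a labeled training set. The file names and group labels are hypothetical placeholders, not data from any of the systems discussed.

```python
from collections import Counter

# Hypothetical training records: (image file, self-reported demographic group).
# A real face dataset would hold many thousands of entries.
training_set = [
    ("img_0001.jpg", "group_a"),
    ("img_0002.jpg", "group_a"),
    ("img_0003.jpg", "group_a"),
    ("img_0004.jpg", "group_b"),
    ("img_0005.jpg", "group_c"),
]

# Count how many images each group contributes and print its share.
counts = Counter(group for _, group in training_set)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%} of the dataset)")
```

A heavily skewed breakdown like the one above is the kind of imbalance that, per Raji’s argument, ends up encoded in a trained model’s predictions.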
According to the experts featured by the BBC, generative adversarial networks (GANs) are at the core of efforts to mitigate facial recognition biases.
To corroborate this theory, Langran examined the work of Generated Photos, a company that collects thousands of volunteers’ faces with the specific goal of training facial recognition algorithms on more diverse datasets to tackle facial recognition biases.
The BBC journalist also spoke on the program with Sharon Zhou, a Stanford scholar and GAN expert, who confirmed the value of generative adversarial networks in helping improve face biometric systems.
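For readers unfamiliar with the technique, the following is a minimal GAN training loop in PyTorch: a generator learns to produce images a discriminator cannot distinguish from real ones, and the resulting synthetic faces can then be used to diversify a training set. The network sizes are illustrative, and the random tensors standing in for real face images are assumptions made to keep the example self-contained; this is a sketch of the general idea, not Generated Photos’ or Zhou’s actual implementation.

```python
import torch
import torch.nn as nn

LATENT, IMG = 64, 32 * 32  # noise vector size; flattened 32x32 grayscale image

# Generator: maps random noise to a synthetic image with pixels in [-1, 1].
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
# Discriminator: outputs a real/fake logit for an image.
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    # Stand-in for a batch of real face images, scaled to [-1, 1];
    # a real pipeline would load photographs here.
    real = torch.rand(32, IMG) * 2 - 1
    fake = generator(torch.randn(32, LATENT))

    # Train the discriminator: label real images 1 and generated images 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generator(torch.randn(n, LATENT)) yields synthetic images
# that could be added to an under-represented portion of a dataset.
```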
Onfido vice president discusses facial recognition biases
Facial recognition biases were also recently discussed by Mohan Mahadevan, VP of research for Onfido.
Talking to Karen Roby of TechRepublic, Mahadevan echoed the BBC’s analysis of datasets, which he described as “incomplete” and therefore biased.
“And then, when we build algorithms, we then exacerbate that problem by adding more bias into the situation,” Mahadevan explained.
The Onfido executive also highlighted that these systems do not operate in a vacuum. Even if the datasets were as comprehensive as possible and the algorithms programmed to minimize bias, these systems would change and evolve once deployed.
“The real-world data is always going to drift and move and vary,” Mahadevan pointed out. “So, you have to pay close attention to monitor these systems when they’re deployed in the real world, to see that they remain minimally biased.
“And you have to take corrective actions as well, to correct for this bias as it happens in the real world,” he added.
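The kind of post-deployment monitoring Mahadevan describes could look something like the Python sketch below, which tracks the false rejection rate per demographic group over a rolling window and flags groups that drift past a tolerance. The window size, gap threshold, and function names are hypothetical choices for illustration, not Onfido’s actual tooling.

```python
from collections import defaultdict, deque

WINDOW = 1000   # verification events kept per group (hypothetical)
MAX_GAP = 0.02  # tolerated gap above the overall false rejection rate

# group -> rolling record of outcomes (True = user was falsely rejected)
recent = defaultdict(lambda: deque(maxlen=WINDOW))

def record_verification(group: str, falsely_rejected: bool) -> None:
    """Log the outcome of one face verification attempt."""
    recent[group].append(falsely_rejected)

def check_bias_drift() -> list[str]:
    """Return groups whose false rejection rate drifted past the tolerance."""
    rates = {g: sum(q) / len(q) for g, q in recent.items() if q}
    if not rates:
        return []
    overall = sum(rates.values()) / len(rates)
    return [g for g, rate in rates.items() if rate - overall > MAX_GAP]
```

Any group flagged by check_bias_drift would then prompt the corrective action Mahadevan mentions, such as retraining on rebalanced data or adjusting match thresholds.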
Onfido experienced substantial growth in the first quarter of 2021, with the company announcing last week a 93 percent revenue increase on a year-over-year basis.
The company has also recently unveiled a new partnership with multinational telecommunications giant Orange.