Amazon claims to detect facial recognition bias with unlabeled data

Amazon Web Services has unveiled a method to evaluate bias in facial recognition algorithms without using annotated identity labels.
A research paper on the topic, spotted by MarktechPost, describes a way to detect performance disparities that are indicative of bias.
The method, called Semi-supervised Performance Evaluation for Face Recognition, or SPE-FR, reportedly detects these disparities even though it only estimates a model’s performance on data from different demographic groups, rather than measuring it against identity-annotated test sets.
SPE-FR could make evaluating models for bias “much more practical for creators of face recognition software,” according to the MarktechPost article.
“It can be especially useful to companies and agencies prior to system adoption who may otherwise be unable to estimate system performance or detect potential biases because they cannot collect reliable identity annotations for their data,” according to the paper.
In experiments, the researchers trained face biometrics models on data from which specific demographic information had been deliberately hidden to create bias. SPE-FR consistently spotted the resulting performance disparities in these altered datasets, and it outperformed Bayesian calibration in doing so.
“SPE-FR can be applied off-the-shelf to a wide range of face embedding models with state-of-the-art designs and trained on different datasets,” reads the paper.
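The coverage does not detail SPE-FR’s internals, but the disparities it is designed to surface are gaps in error rates between demographic groups. The sketch below is a minimal, hypothetical Python illustration of the conventional, label-dependent version of that check (per-group false match and false non-match rates at a fixed similarity threshold), not Amazon’s actual method; all function names and the synthetic scores are assumptions for illustration.

    # Illustrative only: a label-based disparity check, not Amazon's SPE-FR.
    import numpy as np

    def false_non_match_rate(genuine_scores, threshold):
        # Share of same-identity comparisons that fall below the threshold.
        return float(np.mean(genuine_scores < threshold))

    def false_match_rate(impostor_scores, threshold):
        # Share of different-identity comparisons that clear the threshold.
        return float(np.mean(impostor_scores >= threshold))

    def disparity_report(scores_by_group, threshold):
        # scores_by_group maps a demographic label to a pair of arrays:
        # (genuine similarity scores, impostor similarity scores) produced
        # by a face embedding model on identity-annotated test pairs.
        return {
            group: {
                "FNMR": false_non_match_rate(genuine, threshold),
                "FMR": false_match_rate(impostor, threshold),
            }
            for group, (genuine, impostor) in scores_by_group.items()
        }

    # Synthetic scores: group_b's impostor scores sit higher, so its false
    # match rate will be elevated, the disparity signal auditors look for.
    rng = np.random.default_rng(0)
    scores = {
        "group_a": (rng.normal(0.75, 0.1, 5000), rng.normal(0.30, 0.1, 5000)),
        "group_b": (rng.normal(0.72, 0.1, 5000), rng.normal(0.40, 0.1, 5000)),
    }
    print(disparity_report(scores, threshold=0.55))

SPE-FR’s claimed contribution is producing this kind of per-group comparison without the identity annotations a check like this normally requires.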
Biometric Services Gateway under scrutiny in UK
Biometric bias is the subject of another report as well, this one focusing on UK police forces’ increased use of the Biometric Services Gateway, a mobile fingerprinting hardware and software system, and on a live facial recognition pilot deployed by South Wales Police.
The Gateway was first deployed in the UK in 2018. It allows police officers to scan a fingerprint and compare it to police and immigration databases. The latest report by civil rights advocates the Racial Justice Network and Yorkshire Resists follows up on a previous ‘Stop the Scan’ report from last year.
The document is based on freedom of information responses from 35 police agencies. (Eleven refused to respond, citing lack of time and resources.)
Spotted by The Justice Gap, the report claims that police do not use the Gateway in an objective way. Officers can demand that anyone submit to a biometric scan if they suspect the person has committed a crime or is lying about their identity.
The report suggests that Black people are four times more likely to be stopped and biometrically scanned than white people. Asian people are twice as likely to be stopped. Men are about 12 times more likely to be stopped and scanned than women.
The advocacy groups argue that this disproportionate rate of scans could have been exacerbated by the biometric algorithms used in South Wales Police’s facial recognition program, which ended in 2020.
The study points to NIST data from 2019 showing that some algorithms were 10 to 100 times more likely to misidentify a Black or East Asian face than a white face. Not all of the algorithms evaluated are in commercial production, however, and others showed differences in performance between demographics too small to detect, prompting NIST Biometric Standards and Testing Lead Patrick Grother to urge those implementing facial recognition to evaluate the specific algorithm they intend to deploy for bias.