SIA blasts misrepresentation of facial recognition studies
Reading articles about facial recognition in consumer publications could lead one to believe the technology is highly inaccurate and biased, based on references to tests and official studies and the strident opinions of a handful of activists. The Security Industry Association (SIA) has published an article to set the record straight by addressing the most commonly cited studies and explaining what they really mean.
Face biometrics are in fact highly accurate, and the leading commercial algorithms do not deliver biased results, the SIA states.
“In fact, the evidence most cited by proponents of banning facial recognition technology is either irrelevant, obsolete, nonscientific or misrepresented,” the group says in an article co-authored by SIA Senior Director of Government Relations Jake Parker and Rank One Computing COO and General Counsel David Ray.
The 2018 Gender Shades study by MIT Media Lab researcher Joy Buolamwini did not assess facial recognition or biometric matching algorithms at all, but rather facial analysis algorithms that apply demographic labels, the SIA points out. The results are based on algorithms now considered obsolete, and were challenged immediately by IBM, yet the study is often cited as the main evidence for the contention that face biometrics do not work for women or people with dark skin.
An example of this approach is provided by an editorial appearing in Fast Company, written by a pair of fellows at the NAACP Legal Defense Fund.
The same year, the American Civil Liberties Union (ACLU) published a blog post describing a test in which Amazon Rekognition falsely matched members of Congress against a mugshot database, with a disproportionate number of false positives for people of color. The SIA says the ACLU’s use of an inappropriately low similarity score as a matching threshold, along with its silence on the ranking of non-matching images and on whether true positives were returned, undermines the ACLU’s conclusions. Further, the SIA says, the ACLU designed and interpreted the test to support a predetermined position, and Amazon’s re-enactment, performed with a higher confidence threshold and a larger database, returned zero false matches.
An FBI study from 2012 showed a 5 to 10 percent differential in biometric matching performance for photos of Black people, but the SIA notes that in the intervening years industry accuracy has improved from errors per thousand candidates to errors per million, rendering the study irrelevant to today’s algorithms.
The U.S. National Institute of Standards and Technology (NIST) looked into the issue in a report published in late 2019. Soon after the report’s release, NIST Biometric Standards and Testing Lead Patrick Grother told Biometric Update that the results were being overgeneralized and misinterpreted by some in the media. The SIA makes a similar point, noting that the top tier of facial recognition algorithms (which includes most government and law enforcement suppliers) showed “undetectable” demographic differences. Further, it has since been discovered that the differences that were found were in significant part due to the algorithms identifying real passport fraud among the samples from Somalia.
Ongoing NIST testing has actually revealed that among the top 20 performing facial recognition algorithms, accuracy for the worst-performing demographic is still 99.7 to 99.8 percent of that for the best, and that white males are the demographic with the lowest match rates.
Keep calm and keep on testing
The SIA acknowledges the limitations of any individual method of testing accuracy, and states its support for further scientific research into biometric accuracy and consistency. The group cites the use of facial recognition by the Department of Homeland Security to check the identity of 77 million travelers so far at airports and border crossings, and suggests that policy should be geared toward fostering continued improvements in biometric accuracy, and ensuring “only the most accurate technology is used in key applications and that it is used in bounded, appropriate ways that benefit society.”
The group recently made recommendations to the U.S. government for next steps, and has pointed out that most states that have considered bans or extensive restrictions on facial recognition have not enacted them.