Group of researchers calls for Amazon to stop selling biometric Rekognition to law enforcement
A group of 26 “concerned researchers” has called for Amazon to stop providing its biometric Rekognition service to law enforcement agencies. In a blog post, the group refutes the company’s response to criticism by MIT researcher Joy Buolamwini and University of Toronto researcher Inioluwa Deborah Raji, whose study found imbalanced error rates when the technology was used (or misused) to classify the gender of test images.
The researchers co-signing the Medium post include recent Turing Award winner Yoshua Bengio; Anima Anandkumar, a former principal scientist at AWS; researchers from several universities; and representatives of other companies that use or provide facial recognition technology, including Google, Facebook, and Microsoft.
Buolamwini told Phys.org that she did not expect Amazon’s reaction to be “hostile.”
The post defends the value of Buolamwini and Raji’s research and criticizes a pair of blog posts by Amazon’s Matt Wood and Michael Punke for misrepresenting the work’s technical details. Facial recognition and facial analysis technologies, for instance, are either directly or indirectly related to each other, depending on the approach used, according to the post.
“(I)n contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications.”
The researchers point out that the study was conducted in the context of Rekognition’s use, with a publicly available API, and that the study data has been made public and the research replicated by several companies. They also note that there are “no laws or standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties.”
The distinction between facial recognition and facial analysis is explored in terms of the difference between biometric and non-biometric processing, with the researchers arguing that machine learning researchers consider recognition a particular type of analysis (that is, a biometric analysis), at least in some cases. Regardless of how precisely the two services are related, the researchers say the evidence of bias in one raises legitimate concern about the other. They also dismiss Wood and Punke’s repeated point that the technology was not used as intended in Raji and Buolamwini’s study, arguing that systems need to be tested in real-world conditions. The Washington County Sheriff’s Office (WCSO), one of the two police forces known to be using Rekognition, has admitted that it does not follow Amazon’s recommendation to use a confidence threshold.
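The confidence threshold at issue is the similarity score Rekognition attaches to each candidate face match; Amazon has recommended that law enforcement act only on very high-confidence matches (the company has publicly cited a 99 percent threshold). As a rough sketch of what ignoring that guidance means in practice (the `FaceMatches`/`Similarity` response shape mirrors the Rekognition `CompareFaces` API, but the filtering helper and the scores are our own illustration):

```python
# Sketch: filtering Rekognition-style face matches by a similarity threshold.
# The response shape mirrors Rekognition's CompareFaces output; the helper
# function is illustrative and not part of any AWS SDK.

RECOMMENDED_THRESHOLD = 99.0  # threshold Amazon has cited for law enforcement use


def filter_matches(face_matches, threshold=RECOMMENDED_THRESHOLD):
    """Keep only matches whose similarity score meets the threshold."""
    return [m for m in face_matches if m["Similarity"] >= threshold]


# Hypothetical response fragment with made-up similarity scores.
response = {
    "FaceMatches": [
        {"Similarity": 99.2, "Face": {"FaceId": "candidate-a"}},
        {"Similarity": 85.4, "Face": {"FaceId": "candidate-b"}},
    ]
}

confident = filter_matches(response["FaceMatches"])
# At the recommended threshold, only the 99.2% match survives; an agency
# operating at a lower threshold would also surface the 85.4% match,
# treating a much weaker candidate as a potential hit.
```

The point of the sketch is simply that the threshold is an operational choice made by the agency, not a property enforced by the service, which is why the researchers treat real-world usage, rather than intended usage, as the relevant test condition.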
The researchers also take aim at Amazon’s claims that its critics declined to make their training data and testing parameters available, and that Amazon has not found disparities in gender classification by ethnicity. The Gender Shades project website makes the data available subject to licensing agreements, and companies unable to license it, such as IBM and Microsoft, have produced comparable results by following the guidelines in the paper. Further, they stress the distinction between ethnicity, which is a social construct, and skin type, which is not, making the former an inadequate stand-in for the latter.
“Dr. Wood’s lack of recognition of this important distinction while discrediting peer reviewed research that undeniably motivates caution is disconcerting,” the researchers write.
Amazon has expressed support for a national legislative framework for facial recognition, and recently updated its Rekognition model for improved accuracy. The company also faces a potential fight with minority shareholders over government use of Rekognition.