Amazon finds zero false positives in Rekognition testing and takes issue with bias report
Accusations that Amazon provides inaccurate facial biometric technology and lacks appropriate concern for the ethical and social implications of facial recognition are not properly grounded in fact, the company says in a blog post responding to a recent research paper and newspaper article. Amazon says it is “acutely aware” of concerns about the technology, and “highly motivated” to improve it on an ongoing basis, AWS General Manager of Artificial Intelligence Dr. Matt Wood writes, in response to controversy generated by a follow-up study on bias in various facial biometric algorithms by Joy Buolamwini.
Amazon’s primary objection to the findings is that facial analysis is used as a proxy for Amazon Rekognition, even though it is a separate service built on different underlying technology and training data. Wood also notes that the confidence threshold used in the research is not included in the available information, and argues that facial analysis use cases include searches that return all possible matches, while for law enforcement uses of Rekognition the confidence threshold should be set to 99 percent.
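The threshold distinction Wood draws can be sketched in a few lines of Python. This is an illustrative snippet, not the Rekognition API: the `filter_matches` helper, field names, and similarity scores are hypothetical, and stand in for how a match threshold changes which candidates a system surfaces.

```python
# Illustrative sketch of confidence-threshold filtering (hypothetical
# data and helper; not the actual Amazon Rekognition API).

def filter_matches(candidates, min_confidence):
    """Return only candidate matches whose similarity meets the threshold."""
    return [c for c in candidates if c["similarity"] >= min_confidence]

# Hypothetical candidate matches with similarity scores (percent).
candidates = [
    {"face_id": "a", "similarity": 99.4},
    {"face_id": "b", "similarity": 91.7},
    {"face_id": "c", "similarity": 80.2},
]

# An exploratory search might use a low threshold and surface all
# possible matches for human review:
broad = filter_matches(candidates, 80)    # all three candidates

# The 99 percent threshold Amazon recommends for law enforcement use
# keeps only the highest-confidence match:
strict = filter_matches(candidates, 99)   # only face "a"

print(len(broad), len(strict))
```

The same candidate pool yields very different results at the two thresholds, which is the crux of Amazon’s objection that tests run at an unreported threshold do not reflect the recommended law enforcement configuration.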
A new version of the Rekognition algorithm was launched in November, which is not the version used by Buolamwini, according to Wood. In internal testing of 12,000 images from six ethnicities, Amazon “found no significant difference in accuracy with respect to gender classification.” The company also matched images from parliamentary websites against the Megaface dataset with Rekognition at a 99 percent confidence level, and found zero false positive matches.
Wood also addresses the issues of transparency and standards.
“Beyond our internal tests or single ‘point in time’ results, we are very interested in working with academics in establishing a series of standardized tests for facial analysis and facial recognition and in working with policy makers on guidance and/or legislation of its use,” Wood writes. “One existing standardized test from the National Institute of Standards and Technology (NIST) allows a simple computer vision model to be tested in isolation. However, Amazon Rekognition uses multiple models and data processing systems under the hood, which cannot be tested in isolation. We welcome the opportunity to work with NIST on improving their tests to allow for more sophisticated systems to be tested objectively, and to establish datasets and benchmarks with the broader academic community.”
Any suggestion or implication that Amazon is not improving its technology is false, according to the blog post, as the current version of Rekognition is the fourth update. Wood also writes that direct offers by Amazon to discuss, update, and collaborate on the tests have not been acknowledged by the researchers.
Potential risks associated with facial recognition are acknowledged in the post, which also says Amazon suspends customers’ use of the service if they are found to be using it irresponsibly or to infringe civil rights. The company remains optimistic about the benefits the technology can deliver, however, such as finding missing children and providing better payment authentication, and points out that no misuses of Rekognition by law enforcement have been reported yet.
“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Wood argues. “We are eager to continue to work with researchers, academics, and customers, to continuously improve as we evolve this important technology.”
Issues related to facial recognition were a hot topic at the World Economic Forum in Davos, and Microsoft has been positioning itself as a champion of public good with calls for government regulation.