AI Now Institute calls for regulation of facial recognition and compares affect recognition to phrenology
Days after the latest call from Microsoft for regulation of facial recognition, the AI Now Institute has published its third annual report, making 10 recommendations for governments, researchers, and industry practitioners. These include regulation of artificial intelligence in general, and “stringent regulation to protect the public interest” applied to facial recognition and affect recognition in particular.
The AI Now Institute, based at New York University, calls 2018 “a dramatic year in AI.” In a post announcing its report, the organization cites Facebook’s potential incitement of ethnic cleansing in Myanmar, Cambridge Analytica’s election manipulation attempts, Google’s secret censored search engine for the Chinese market, Microsoft’s contracts with U.S. Immigration and Customs Enforcement (ICE), and worker protests at Amazon as major events that generated negative headlines related to AI. The scandals all relate to accountability, according to the Institute, and motivated its 10 recommendations.
U.S. lawmakers and Singapore’s Advisory Council on the Ethical Use of AI and Data have recently called for further consideration of AI governance.
The 62-page AI Now Report 2018 (PDF) recommends sector-specific regulation of AI, and regulation of facial recognition that goes beyond notice to set a high threshold for consent. Further, the group slams the use of facial recognition and AI to act on sentiment analysis and other inferences about individuals.
“Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces,” according to the post. “These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.”
Amazon was recently awarded a patent for using Alexa’s voice recognition to identify user emotions.
The 10 recommendations also include the development of new governance approaches, including internal accountability structures, the waiving of trade secrets or other legal claims that prevent public accountability by contributing to the “black box” effect, increased protections for conscientious objectors and whistleblowers, and the application of consumer “truth in advertising” laws to AI. Workplace discrimination issues, detailed accounts of the full-stack supply chain, increased support for civic participation, and the expansion of academic AI training to include social and humanistic disciplines are also addressed in the recommendations.
The report also criticizes the rampant testing of AI systems “in the wild,” and explores the idea of “algorithmic fairness,” ultimately suggesting that the perspectives and expertise of both those in technical fields and those in other communities must be engaged to successfully address the issue.