AI Now Institute calls for regulation of facial recognition and compares affect recognition to phrenology

Days after the latest call from Microsoft for regulation of facial recognition, the AI Now Institute has published its third annual report, making 10 recommendations for governments, researchers, and industry practitioners. These include regulation of artificial intelligence in general, and “stringent regulation to protect the public interest” for facial recognition and affect recognition in particular.

The AI Now Institute, based at New York University, calls 2018 “a dramatic year in AI.” In a post announcing the report, the organization points to Facebook’s potential incitement of ethnic cleansing in Myanmar, Cambridge Analytica’s election manipulation attempts, Google’s secret censored search engine for the Chinese market, Microsoft’s contracts with U.S. Immigration and Customs Enforcement (ICE), and worker protests at Amazon as major events behind the year’s negative AI-related headlines. According to the Institute, these scandals all relate to accountability and motivated its 10 recommendations.

U.S. lawmakers and Singapore’s Advisory Council on the Ethical Use of AI and Data have recently called for further consideration of AI governance.

The 62-page AI Now Report 2018 (PDF) recommends sector-specific regulation of AI, and regulation of facial recognition that goes beyond notice to set a high threshold for consent. Further, the group slams the use of facial recognition and AI to act on sentiment analysis and other inferences about individuals.

“Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces,” according to the post. “These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.”

Amazon was recently awarded a patent for using Alexa’s voice recognition to identify user emotions.

The 10 recommendations also include the development of new governance approaches, including internal accountability structures, the waiving of trade secrets or other legal claims that prevent public accountability by contributing to the “black box” effect, increased protections for conscientious objectors and whistleblowers, and the application of consumer “truth in advertising” laws to AI. Workplace discrimination issues, detailed accounts of the full stack supply chain, increased support for civic participation, and the expansion of academic AI training to include social and humanistic disciplines are also addressed in the recommendations.

The report also criticizes the rampant testing of AI systems “in the wild,” and explores the idea of “algorithmic fairness,” ultimately suggesting that the perspectives and expertise of both those in technical fields and those in other communities must be engaged to successfully address the issue.
