AI Now Institute calls for regulation of facial recognition and compares affect recognition to phrenology

Days after the latest call from Microsoft for regulation of facial recognition, the AI Now Institute has published its third annual report, making 10 recommendations for governments, researchers, and industry practitioners. These include regulation of artificial intelligence in general, and “stringent regulation to protect the public interest” applied to facial recognition and affect recognition in particular.

The AI Now Institute, based out of New York University, calls 2018 “a dramatic year in AI.” In a post announcing its report, the organization notes Facebook’s potential incitement of ethnic cleansing in Myanmar, Cambridge Analytica’s election manipulation attempts, Google’s secret censored search engine for the Chinese market, Microsoft’s contracts with U.S. Immigration and Customs Enforcement (ICE), and worker protests at Amazon as major events causing negative headlines related to AI. The scandals all relate to accountability, according to the Institute, and motivated its 10 recommendations.

U.S. lawmakers and Singapore’s Advisory Council on the Ethical Use of AI and Data have recently called for further consideration of AI governance.

The 62-page AI Now Report 2018 (PDF) recommends sector-specific regulation of AI, and regulation of facial recognition that goes beyond notice to set a high threshold for consent. Further, the group slams the use of facial recognition and AI to act on sentiment analysis and other inferences about individuals.

“Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces,” according to the post. “These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.”

Amazon was recently awarded a patent for using Alexa’s voice recognition to identify user emotions.

The 10 recommendations also include the development of new governance approaches, including internal accountability structures, the waiving of trade secrets or other legal claims that prevent public accountability by contributing to the “black box” effect, increased protections for conscientious objectors and whistleblowers, and the application of consumer “truth in advertising” laws to AI. Workplace discrimination issues, detailed accounts of the full stack supply chain, increased support for civic participation, and the expansion of academic AI training to include social and humanistic disciplines are also addressed in the recommendations.

The report also criticizes the rampant testing of AI systems “in the wild,” and explores the idea of “algorithmic fairness,” ultimately suggesting that the perspectives and expertise of both those in technical fields and those in other communities must be engaged to successfully address the issue.
