Lawmakers need schooling on face recognition, say subject-matter experts
Governments around the world do not understand how biometric facial recognition works, and that is leading to ill-informed or needlessly harmful laws, says an international coalition of vendors, advocacy groups, non-profits and universities focused on the responsible use of artificial intelligence.
The Partnership on AI has issued a white paper describing in fairly approachable language what facial recognition systems are and how they work. It also walks the reader through key software processes and discusses facial characterization, a related but separate software task.
The white paper describes the differences between facial detection, verification and identification, and, in an appendix, provides a list of questions that policymakers and other stakeholders can use to find out more about facial recognition systems.
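The three tasks the white paper distinguishes can be sketched in code. Everything below is illustrative, not from the report: the embedding function, the enrolled names and the matching threshold are stand-ins for what a real face-recognition pipeline would provide after detection has located a face.

```python
import numpy as np

def embed(seed):
    """Stand-in for a face-embedding model: a deterministic random unit vector.
    A real system would compute this from a *detected* face crop."""
    v = np.random.default_rng(seed).normal(size=128)
    return v / np.linalg.norm(v)

# Enrolled gallery of known identities (illustrative names).
gallery = {name: embed(i) for i, name in enumerate(["alice", "bob", "carol"])}

def verify(probe, claimed, threshold=0.8):
    """Verification (1:1): does the probe face match one *claimed* identity?"""
    return float(probe @ gallery[claimed]) >= threshold

def identify(probe, threshold=0.8):
    """Identification (1:N): search the whole gallery for the best match,
    returning None if nobody scores above the threshold."""
    name, score = max(((n, float(probe @ e)) for n, e in gallery.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

probe = embed(1)  # same seed as "bob", so an identical embedding
print(verify(probe, "bob"))   # 1:1 check against a single claimed identity
print(identify(probe))        # 1:N search across all enrolled identities
```

The design point the sketch makes is that verification compares against one record while identification searches every enrolled person, which is why the two tasks raise different policy questions.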
The report grew out of workshops the organization held late last year to discuss the state of the art and the near-term advances expected of the technology, according to the white paper. The meetings also “provided societal context for the environments where facial recognition technologies are currently being deployed.”
That likely is why facial characterization was broken out as a topic to discuss in the white paper. Characterization is an automated process through which software can “interpret, predict, and categorize the physical appearance of features on a face,” according to the report.
Facial characterization can observe expressions, for example, but the technique does not identify the people observed. This differentiation is going to become central to semi-autonomous and autonomous vehicle makers, whose systems will have to navigate by viewing and, at times, interpreting humans’ faces.
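A toy sketch can make the distinction concrete. The landmark names, coordinates and smile heuristic below are entirely hypothetical; the point is that characterization emits a label about the face, never an identity.

```python
def characterize(landmarks):
    """Toy facial *characterization*: label an expression from (hypothetical,
    pre-extracted) mouth landmarks. The output describes the face; it never
    says who the face belongs to -- the key contrast with identification."""
    lc, rc = landmarks["mouth_left"], landmarks["mouth_right"]
    center = landmarks["mouth_center"]
    # In image coordinates y grows downward, so corners *above* the mouth
    # center (smaller y) give a positive lift.
    corner_lift = center[1] - (lc[1] + rc[1]) / 2
    return "smiling" if corner_lift > 0 else "neutral"

# Mouth corners sit higher than the mouth center -> labelled "smiling".
face = {"mouth_left": (30, 60), "mouth_right": (70, 60), "mouth_center": (50, 65)}
print(characterize(face))  # a label, not a name
```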
Those invited to the workshops included Partnership on AI members, of which there are more than 100 in 13 countries. Also attending were “communities developing, engaging with, and affected by these systems,” although none were named.
Partnership on AI’s members are notable. Among them are the American Civil Liberties Union and GLAAD, the MIT Media Lab and Fraunhofer, Amazon.com and Facebook.com, NVIDIA and IBM, the BBC and the New York Times.
DeepMind, the artificial-intelligence company now owned by Alphabet Inc., is a member (as is Google, which also is owned by Alphabet). One of DeepMind’s co-founders, Mustafa Suleyman, was a founding co-chair of Partnership on AI.
While with DeepMind, Suleyman co-developed a controversial health app called Streams for the UK’s National Health Service. Streams collected data from patients without their prior consent in 2018, which led to Suleyman taking leave from DeepMind. In December, it was announced that he was joining Google in an undefined role.