Law enforcement and civil society grapple with facial recognition policy, transparency
Civil society groups around the world are calling for greater transparency in how police use facial recognition and in how the technology is procured. Lawmakers and regulators are also taking action, though to what effect remains uncertain.
YouTube videos on facial recognition in law enforcement appear to be more positive about the technology and police than are the vendors supplying the technology, according to research reported in The Conversation.
Two Australian academics have produced a pair of studies on attitudes toward the use of facial recognition by police, finding that 61 percent of the more than 200 YouTube videos reviewed were positive about police and biometric technologies. The second study shows that monitoring carried out before a crime has occurred is viewed very differently from forensic applications.
New Zealand academic, India law researchers call for transparency
In ‘Facial Recognition Technology in New Zealand: Towards a legal and ethical framework,’ researchers from the Law Foundation review existing uses and regulations including but not limited to policing and law enforcement, and find that a robust consultative process will be necessary to create appropriate regulatory and oversight mechanisms.
The report finds both a lack of transparency, with corresponding concerns, and major potential for public benefit from facial recognition. It concludes with eight general recommendations: creating a new category of personal information for biometrics, giving individuals greater control, establishing a Biometrics Commissioner for the country, requiring good-quality Privacy Impact Assessments, adding oversight and enforceability to the Algorithm Charter, increasing transparency, developing a code of practice, and reforming information-sharing agreements.
The private sector’s involvement in facial recognition use by police in India is set out in a working paper by the Vidhi Centre for Legal Policy.
The paper’s troubling conclusion is that private sector needs are playing an inappropriate role in public policy creation.
As in New Zealand, the researchers say transparency needs to be increased and regulations updated to govern police use of face biometrics.
As MediaNama reports, however, India’s government does not feel moved to regulate artificial intelligence or facial recognition, suggesting that legacy regulations are sufficient and that further technology development is a priority.
Peaceful protest protections
The European Center for Not-for-Profit Law has published a factsheet on ‘Peaceful Assemblies and Facial Recognition Technology: International Standards’, finding that a report from the UN High Commissioner for Human Rights sets the stage for a complete ban on facial recognition use to monitor peaceful protests.
Further regulatory frameworks are needed, the ECNL argues, though it notes that the OSCE/ODIHR Guidelines on Freedom of Peaceful Assembly and the Council of Europe Guidelines on Facial Recognition reflect UN standards.
Worcester joins ban parade
Worcester, Massachusetts is the latest city in the U.S. to block city departments from acquiring or using facial recognition, AP reports.
A city councilor cited in the report pointed to concerns over racial and gender bias, and the ACLU notes that 1.5 million people in Massachusetts are now covered by municipal restrictions, along with state-wide limitations. Portland, Maine has similar restrictions in place.
Data deletion orders leveled and followed
France’s data protection regulator CNIL has ordered Clearview AI to delete, within two months, the data of all people in France that is not covered by some other legal basis, joining a growing chorus of national data protection authorities making similar demands.
Given that both breaches found by CNIL’s investigation are of the Europe-wide GDPR, the ruling would appear to have implications for other countries across the EU.
CEO Hoan Ton-That told Reuters that because Clearview does not operate in France or Europe, it is not subject to GDPR.
Chinese automaker Xpeng has deleted 430,000 photos of people taken in its showrooms to comply with the country’s facial recognition rules, Caixin Global reports.
Xpeng was also fined 100,000 yuan (roughly US$15,700) and admitted it was unfamiliar with the law.