AnyVision proposes three ethical facial recognition principles for police
Britain’s proposed Surveillance Camera Code of Practice should align with three principles which can support the fair and ethical use of facial recognition by police, according to an open letter written to Biometrics and Surveillance Camera Commissioner Professor Fraser Sampson by AnyVision.
The open letter, written under the title ‘Facial Recognition Apps Should Be Provided to the Police with an Empty Database,’ shares the company’s perspective on the use of face biometrics in criminal investigations and security systems. The letter also presents best practices for applications of ethical facial recognition in law enforcement settings.
“The ethical use of facial recognition is a thorny one and requires a nuanced discussion,” observes AnyVision CEO Avi Golan. “Part of that discussion must explain how the underlying facial recognition system works, but, just as important, the discussion must also involve how the technology is being used by police departments and what checks and balances are built into their processes. We welcome an honest and objective dialogue involving all stakeholders to draft fair and balanced regulation.”
The proposed update to the Code of Practice was criticized by former Surveillance Camera Commissioner and current Corsight executive Tony Porter as “lightweight.”
AnyVision suggests that broad adoption of object and facial recognition systems has outpaced the due diligence needed to support it. Biometrics should be deployed only when the need for them is clearly justified and proportionate to the intended purpose, and when appropriately validated, according to the company announcement.
That means biometrics should be deployed with empty databases, adequate safeguards for data and privacy, and improved operational due diligence, the three principles AnyVision identifies. Safeguards include databases built from scratch by the customer organization to meet its specific security needs, and encryption of captured data. AnyVision points out that its software also includes a “GDPR mode,” which blurs the faces of people not appearing on the watchlist.
Some past deployments of facial recognition in law enforcement have suffered from a lack of due diligence, AnyVision says, with poor investigative processes leading to wrongful arrests that reflect poorly on the technology. The company argues the way to prevent these problems is through human review and investigation.
Predictive policing algorithms face predictable criticism
A meeting of the UK’s Justice and Home Affairs Committee, meanwhile, heard from academics from New Zealand, Belgium and the United States about the application of new technologies in policing, including facial recognition as well as predictive algorithms.
UC Davis School of Law Professor Elizabeth E. Joh told the committee that many police departments began using predictive AI tools during the 2010s, but some have since abandoned or even banned them. These steps are part of a pushback that Joh says is motivated by concerns about both the technology’s effectiveness and its ethics.
Vrije Universiteit Brussel Professor Rosamunde Elise Van Brakel stated that mitigating the risks associated with new technologies requires going beyond data protection, and that considerations of the social impact of technologies should precede investment in their development.
University of Otago Professor Colin Gavaghan noted the potential limits of human review, particularly if it is not applied early enough in the process, and pointed to a movement towards a full-lifecycle auditing and compliance model, which could catch problems that would not be revealed by pre-deployment review alone.