The use of live FRT by British police makes the UK an outlier among democratic states
By Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, Birmingham Law School and School of Computer Science, University of Birmingham
At the end of January, the House of Lords Justice and Home Affairs Committee published a letter to the Home Secretary warning against the use of live facial recognition technology (LFR) by police in England and Wales, rightly noting that it lacks a clear legal foundation yet is being deployed and governed in troubling ways.
The use of facial biometrics in public spaces by police and law enforcement is controversial. Advocates say it will enable law enforcement to catch terrorists and those wanted for serious crimes, as well as find missing children. In short, they argue it will make our streets safer. But will it? Not only is there little demonstrable evidence of its effectiveness, let alone of good value for money, there are also important legal and political questions to answer before we should accept a scaling up of LFR for law enforcement in public spaces.
These systems are, by definition, not error-free. Instead, they generate probabilistic automated matches between the face of a person whose image is captured in a live video feed and an image stored in the system’s “watchlist.” Yet when the system is created as commercial software, those using the technology (including the police) have no access to the underlying data on which it was trained, preventing them from evaluating the quality, representativeness, provenance or legality of the training data.
Even more worrying is the lack of effective oversight or transparency in police-created FRT watchlists. Although the London Metropolitan Police Service (MPS) claims that only those wanted for “serious crimes” have their image uploaded to these watchlists, this is at odds with its reports that these systems were used to apprehend individuals wanted for drug possession offences and traffic violations. Nor is there effective and transparent governance over police officer intervention following an automated “match” alert. We have already seen individuals reprimanded for camera-avoidance behaviour during live FRT trials, even though they were merely exercising their legal right to privacy.
Civil rights campaigners have successfully mounted a legal challenge against the use of FRT by police: famously, in the Bridges case, the court held that the South Wales Police’s deployment failed to comply with anti-discrimination laws. Yet the court also indicated that police use of live FRT was not unlawful per se, enabling the South Wales Police and the London Metropolitan Police to continue using the technology, now deploying it on a permanent basis.
A democratic outlier
Ironically, despite hosting the inaugural AI Safety Summit at Bletchley Park last year, the UK is an outlier in the speed and enthusiasm with which it is adopting facial recognition for law enforcement, without any clear legal framework or safeguards, and in the absence of public consultation and consent.
In the EU, by contrast, debate about the deployment of facial recognition has taken place amidst negotiations over the provisions of the EU’s AI Act, which was officially approved recently. Under the Act, the use of ‘real-time’ remote biometric identification systems in public spaces by police or law enforcement is prohibited, except for: targeted searches for specific victims of abduction, trafficking in human beings, or for missing persons; the prevention of a specific, substantial and imminent threat to life or safety; or the location or identification of a person suspected of committing a serious criminal offence punishable by imprisonment for a maximum period of at least four years.
To deploy it, the law enforcement authority must obtain prior authorization from a judicial authority, which will only be granted if its proposed use is ‘necessary and proportionate’ to the achievement of one of the specified objectives. In addition, a fundamental rights impact assessment must be undertaken in advance, the system must be registered in an EU-wide database, the national data protection authority and relevant national regulator must be notified, and the authority must submit annual reports on its use of the systems. It must also comply with the necessary and proportionate safeguards imposed by the national legislation authorizing its use, especially concerning temporal, geographic and personal limitations.
The EU’s new law allows only strict and very limited use, and establishes multiple institutional safeguards to enforce those limits. These measures reflect a proper understanding that these AI-powered applications are anti-democratic by nature, enabling mass, remote surveillance of people going about their lawful business.
Against that backdrop, the UK is an outlier amongst democratic states in forging ahead with the deployment of live FRT in public spaces. There have been repeated and urgent calls for a clear legislative framework and democratic debate, to determine whether the British public wishes to allow this, and if so, under what circumstances and with what safeguards. None of this has taken place, and it is long overdue.
In its absence, the British police are expanding their FRT “watchlists.” Hundreds of thousands of people going about their lawful business in public are being treated, in effect, as ‘suspicious’, their faces subjected to automatic surveillance that prompts the police to stop them and ask them to establish their identity, without identifying the lawful basis upon which they do so.
Such a practice is deeply at odds with the basic tenets of British democracy, in which every person is presumed innocent until proven guilty. We are all entitled to go about our lawful business in public without intervention by police or other coercive powers of the state.
Even for those who think that the democratic cost is a price worth paying, the value for money of this technology has not been established. For example, in London the MPS ramped up its use of LFR throughout 2022, scanning 144,366 people’s biometric information over six deployments between January and July. This resulted in just eight arrests, including for drug possession, assault of an emergency worker, failure to appear in court, and an undisclosed traffic violation.
British democracy deserves better.
About the author
Karen Yeung is an Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at Birmingham Law School. She joined the University of Birmingham’s School of Computer Science in January 2018. Her research has been at the forefront of understanding the challenges associated with the regulation and governance of emerging technologies. Over the course of more than 25 years, she has developed unique expertise in the regulation and governance of, and through, new and emerging technologies. Her ongoing work focuses on the legal, ethical, social and democratic implications of a suite of technologies associated with automation and the ‘computational turn’, including big data analytics, artificial intelligence (including various forms of machine learning), distributed ledger technologies (including blockchain) and robotics.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author, and don’t necessarily reflect the views of Biometric Update.