
UK citizens say the police should be using AI – but there are conditions…

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner

As we come to terms with its potential reach, debates about the use and abuse of Artificial Intelligence (AI) continue to make headlines. How to adopt and adapt to AI responsibly and accountably is a strategic challenge for every sector – and policing is no exception.

Accountability is about people with power using it properly, and where those people are the police, their use of power can have profound consequences. If their power is amplified by technology, accountability extends to its use. Advanced technology comes with advanced accountability. With AI-enabled solutions becoming more affordable and reliable, police chiefs in many countries are united by two practical questions: “What does AI accountability look like?” and “How do we achieve it?”

These questions were at the heart of an ambitious research project at the Centre for Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (CENTRIC). The answers are set out in the AIPAS framework and in the recently published national citizen consultation conducted as part of the UKRI-funded AIPAS (AI Accountability in Policing and Security) project.

Accountability is closely aligned with lawfulness, but there’s more to it than keeping within the law. Police organisations are emanations of the state and exercise exceptional powers. Operating under the Rule of Law, the police are answerable to the courts for their use of AI tools, but other pathways must also be in place to map their wider accountability.

The ways in which the police have to answer for their action (and inaction), and the people to whom they are answerable, vary from one country to another. However, the opportunities for policing to use AI-enabled technology are very similar across the world.

Disentangling accountability principles from legal frameworks is a challenge; so too is the asymmetrical speed of evolution in AI. Accountability when using AI requires the balancing of three things: the technological (what can be done), the legal (what must/must not be done) and the societal (what people will support/expect to be done). Looking at police accountability from these three vantage points, researchers recognised that the technology is changing much faster than the law, while public attitudes towards its use by the police had not previously been captured.

Drawing on the collective experience of police officers and staff, and an initial survey of over 6,000 people from countries across Europe, North America and beyond, the team identified the indicators of police accountability in using AI-driven technologies.

Because of the overlap in technology/law/society, it is difficult for policing bodies to tell whether their technical specifications satisfy the law. Similarly, nailing the techno-legal issues of AI tells you nothing about the expectations of the citizen when you use it. To help the police and the citizen understand how AI policies and practices will fit with the expectations or concerns of their communities, researchers sought the views of 10,114 UK citizens. Respondents ranged in age from 14 to 85+, and the majority (61.3%) across all four nations supported the use of AI in policing, provided it came with strong accountability, oversight, and technological and legal safeguards.

Nearly two thirds (64%) believed that AI enhances police capabilities, and the survey showed recognition of key advantages to AI adoption in policing, especially in the context of national security, police effectiveness in solving crime, and freeing up time and resources to focus on high-priority offending.

But the support wasn’t unconditional, and the survey also identified significant concerns about the police using AI, including oversurveillance and the misuse of personal data. Of particular significance to biometric capabilities is that nearly 40% of respondents said they would be unwilling to contribute their personal data for the training of police AI systems (only 9.3% would be very willing), and many were concerned that the police would shift ‘blame’ onto the technology when things go wrong. These findings are highly relevant to the use of AI-enabled systems like facial recognition technology (FRT). The survey also found regional variations in attitudes across the devolved areas (Scotland and Northern Ireland), yet public space surveillance regulation remains grounded in data protection, covered by a UK-wide framework.

The survey also revealed that support for police use of AI is heavily contextual, with trust and acceptance tending to be highest where deployment carries low risks for the general public. Again, this is a critical consideration before crossbreeding CCTV systems with biometric capabilities for police purposes and risking a Frankenstein effect. In terms of operational scenarios, the highest level of acceptance was for using AI to identify perpetrators of child sexual exploitation and terrorism, two areas that have been discussed here previously to illustrate ‘perfect use cases’ for police FRT. Lowest levels of support were for the automation of core functions like 999 emergency calls.

Who says the police should use AI? Most of the 10,114 UK citizens in the CENTRIC survey. But three quarters wanted mandatory accountability processes in place before the police use it – the AIPAS legal analysis shows that’s missing. At the same time, fewer than 20% believed there was enough accountability for police use of AI. Both findings corroborate the need for an accountability framework; both are addressed within AIPAS and the accompanying software tool that helps policing bodies audit their policies and practices against its principles.
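To make the idea of auditing policy against accountability principles concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the principle names, the weights, the 0–2 scoring scale) is invented for illustration and is not drawn from the AIPAS framework or its actual software; it shows only how principle-level expectations spanning the technological, legal and societal dimensions could be made checkable.

```python
# Hypothetical illustration only: these criteria, weights and the 0-2
# scoring scale are invented and are NOT the AIPAS framework's actual
# principles or audit tool.

from dataclasses import dataclass


@dataclass
class Criterion:
    name: str        # accountability principle being checked
    dimension: str   # "technological", "legal" or "societal"
    weight: float    # relative importance in the overall score


# Invented example criteria spanning the three dimensions in the article.
CRITERIA = [
    Criterion("Human review of AI-assisted decisions", "societal", 2.0),
    Criterion("Documented lawful basis for each deployment", "legal", 2.0),
    Criterion("Audit logging of model inputs and outputs", "technological", 1.0),
    Criterion("Published redress route for affected citizens", "societal", 1.0),
]


def audit(scores: dict[str, int]) -> float:
    """Return a weighted 0-1 readiness score.

    `scores` maps criterion name -> 0 (absent), 1 (partial) or 2 (met).
    Criteria left unscored count as 0.
    """
    earned = sum(c.weight * scores.get(c.name, 0) for c in CRITERIA)
    possible = sum(c.weight * 2 for c in CRITERIA)
    return earned / possible


if __name__ == "__main__":
    example = {
        "Human review of AI-assisted decisions": 2,
        "Documented lawful basis for each deployment": 1,
        "Audit logging of model inputs and outputs": 2,
    }
    print(f"Accountability readiness: {audit(example):.0%}")  # 67%
```

The design point of such a sketch is simply that accountability claims become testable once each principle is expressed as a scorable criterion; for the real criteria and methodology, the published AIPAS framework and tool should be consulted directly.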

The effective use of AI in policing is about balancing the technology, the law and the expectations of society. As Supreme Court Justice Lord Kerr put it: “A power on which there are insufficient legal constraints does not become legal simply because those who may have resort to it exercise self-restraint. It is the potential reach of the power rather than its actual use by which its legality must be judged.” The potential reach of AI-enabled biometric power in policing is almost limitless – only clear accountability will provide the constraints needed to make it publicly acceptable.

About the author

Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.
