Should police AI have discretion?

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
“Discretion is the art of suiting action to circumstance – it is the police officer’s daily task”. So said Lord Scarman in his totemic report to the UK Parliament following the Brixton Disorders of April 1981.
The discretion Scarman was referring to was in the context of street searches, what others call stop and frisk (with the racially disproportionate focus being more on the stopping than the frisking). The most intrusive and consequential police powers are discretionary – how could they be anything else? – and the definition applies equally to the exercise of any of them.
When suiting action to circumstance, the variables are almost infinite, and one of the great advantages of Artificial Intelligence (AI) is the capability to compute them at a speed and scale unimaginable in 1981. Revealing patterns, positing options and inferring outcomes are among the phenomenal contributions that AI can bring to policing – but how will that affect the use of discretionary powers?
If they’re using AI to carry out ‘neutral’ or administrative tasks, like buying vehicles or modelling staffing ratios, the police are no different from any other public body, but their use of operational powers is a distinct exercise of force by the state. Using AI in this context would raise new issues, perhaps unique to law enforcement. As I have noted before, there’s a big difference between ordering provisions off the shelf and ordering people off the street.
At one level, basic computer-aided decision making is suiting action to circumstance – allocation of response resources to incidents is an example – but that does not involve a discretionary use of power. The key AI question, “is this a decision-making tool or a decision support tool?”, never arose when the software was only capable of the latter. Now that AI is involved, there are some fundamental differences from what has gone before. Artificial Intelligence systems are capable of taking decisions themselves, offering solutions beyond their training and improving their performance for the next time.
Given this capability, how much discretionary power should police AI systems be permitted? For those saying ‘none’, consider this: once the police have invested public money in an AI system, aren’t they duty bound to maximise its efficacy, find efficiencies and free up as much resource as budgets will allow for tackling priorities? What if the AI’s decisions could uniquely reduce online harm to children or prevent sexual exploitation? For those who say ‘of course’, how will the parameters of discretion be constrained? Addressing the limitations of AI discretion will be more a question of jurisdiction than function.
In the film Demolition Man, the police have lost their discretion and rely on computer-generated options as they approach a situation. When they face old-school villain Simon Phoenix (Wesley Snipes), who has been deep frozen but returns thawed and threatening, this AI solution proves to be useless. Thankfully, this is Hollywood AI and Sly Stallone appears in the nick of time as misunderstood cop John Spartan – also frozen in perpetuity. He returns to his beat defrosted, with all his discretionary instinct intact, and the day is saved once again.
We can’t yet rely on cryogenics (or wonderfully apt names) to preserve police discretion in the world of policing AI, so we’re going to need a more realistic strategy.
One way AI could help with discretionary police powers is by advising on projected impact and viable alternatives. It could provide an objective assessment for the decision maker to check the proportionality and reasonableness of the proposed action against the circumstances – things that were central to the Scarman Inquiry.
Another is bias reduction. While there is much anguish about AI bias in policing, AI could reduce human bias, flagging the risk at organisational (if not individual) level before it becomes policy or hardens into culture. Scarman again.
A downside is the potential for deskilling police officers (send for Spartan!). A CENTRIC survey of citizens’ concerns about the police use of AI is showing some interesting early responses here.
Then there’s accountability. How does an aggrieved citizen complain about their treatment when discretionary police power is used (another key Scarman consideration)? If AI is partly responsible for a decision, should it shoulder part of the blame? How? The same survey is currently asking thousands of citizens about concerns that the police might pass the blame to AI if something went wrong. A strong response might be a proxy for public attitudes to the police generally as much as a measure of misgivings about their technology, but it will be relevant either way.
The Scarman Report concerned the organisation and role of the police in society, particularly in relation to racial disadvantage, and there is not room here to do it justice. After the report’s publication, the UK Parliament debated how policing must strike the right balance between consent, independence and accountability, a task which it recognised required ‘great discretion on the part of each individual police officer in the diverse society of today’. Both the balancing and the discretion still apply.
Even if policing is not yet using AI to take active tactical decisions, it will benefit enormously from the new capabilities AI offers. Artificial Intelligence brings substantial benefits to areas such as forensics and biometrics (fingerprint and DNA analysis), surveillance (facial recognition) and crime mapping (geospatial prediction); its discretionary use will have a profound impact on public trust and confidence.
The proper role for AI in discretionary policing must look both ways. Responding to the Scarman Report, one member of the House of Lords said:
“It is a tribute to the police that they can exercise the discretion needed to do their duty…and, let us make no mistake, it is the responsibility of the community to assist the police in their task”.
How will police AI help the citizen suit action to that circumstance?
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.