Can AI predict who will commit crime?

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
Will AI be able to tell us who is going to commit crimes in the future? Once a purely fictional question, it is now getting factual attention under the banner of probabilistic policing.
Our yearning to know what the future holds is as persistent as our claims to have it cracked. From the Nggàm readers of crab movements in Cameroon to the online fortune tellers of Bangkok, divination remains an enduring feature in many cultures. It’s a fair bet that AI will feed our ancient appetite and reinforce our prophetic confidence.
Predictive AI isn’t clairvoyance; it takes existing, verifiable information and makes inferences from it. Machine learning computes probability from multiple data points and, in areas where there are millions of factors to be crunched – such as the weather – it’s delivering exciting advances. Intelligent forecasting offers huge potential benefits, and sectors such as energy are already using Predictive AI to combine historical data and model complex simulations to inform resource allocation.
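To make the distinction concrete, here is a minimal sketch of inference in this sense – in Python, with invented weather records rather than any production forecasting system. The “prediction” is nothing more than a probability computed from existing, verifiable data points.

```python
# A toy illustration of probability-from-data-points. The conditions and
# outcomes below are invented; real forecasting models crunch millions of
# factors rather than two.

# Each record: (pressure_falling, humidity_high) -> did a storm follow?
history = [
    ((True,  True),  True),
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
    ((True,  True),  False),
]

def storm_probability(conditions):
    """Estimate P(storm | conditions) purely from past observations."""
    matching = [stormed for observed, stormed in history if observed == conditions]
    if not matching:
        return None  # no precedent at all: nothing to infer from
    return sum(matching) / len(matching)

print(storm_probability((True, True)))   # ~0.67 - storms followed 2 of 3 times
print(storm_probability((False, True)))  # 0.0  - observed once, no storm
```

Where the future genuinely resembles the past – pressure systems, load curves – this kind of inference works well.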
Simulating the conditions for individual offending is not the same as calculating the likelihood of storms or energy outages. Offending is often situational and is heavily influenced by emotional, psychological and environmental elements (a bit like sport – ever wondered why Predictive AI hasn’t put bookmakers out of business yet?). Sociological factors also play a big part in rehabilitation, which, in turn, affects future offending.
Predictive profiling relies on past behaviour being a good indicator of future conduct. Is this a fair assumption? Occupational psychologists say past behaviour is a reliable predictor of future performance – which is why they design job selection around it. Unlike financial instruments, which warn against inferring future returns from past performance, human behaviour does have a perennial quality. Leopards and spots come to mind.

However, identifying who is going to commit future crime means identifying the crime they are going to commit – and that can be unpredictable too. Activities that are currently lawful may be criminalised in the future, while others that were once crimes no longer are. Sexual offences are a good example. In addition, it might seem obvious that, say, people with convictions for dishonesty (theft, burglary, deception, etc.) will probably continue to be dishonest. Perhaps, but in 2017 the UK Supreme Court changed the legal test for dishonesty. A measure used to assess a suspect’s honesty for 35 years was upended in one day. How confident are we that our potential offender will not only break today’s laws but also those of tomorrow?
Some future crimes will have to wait until the technology needed to commit them has been created; the Rule of Law says we’ll have to wait too. And our justice system can get it wrong (a tendency that increased by almost 20 percent in 2023). Will the AI tell us who will be wrongly convicted in the future?
One mark of criminal effectiveness is the absence of any previous convictions, so we need some other data points to feed into the AI predictor. What should we use? Arrests, acquittals, acquaintances? Stop and search history? Maybe genealogy and appearance? Nineteenth-century scientists believed they could predict criminal disposition from facial features, a claim that some have also made about AI. Is that intelligent forecasting? It feels like Minority Report, with the emphasis historically on the former.
Sometimes all this can be a question of chance. Take a footballing example: a teenager recklessly hoofs a football while playing near parked cars. Under the law of England and Wales, if the ball scratches a car they commit criminal damage; if it misses, no crime. Their behaviour was the same either way. Let’s say the ball does hit a car, causing a significant dent. The next day the car owner (victim) recognises the teenager (offender) and takes hold of them in an apparently proper citizen’s arrest. The teenager is prosecuted but acquitted, meaning there was no crime from the outset, making the citizen’s arrest retrospectively unlawful and the car owner (offender) liable for the offence of battery of a minor (victim). Who saw that coming? No wonder Paul Gascoigne said he will never make predictions.
The principal question here for policing is: how much of this is useful?
Even if the data could reliably tell us who will be charged with, prosecuted for and convicted of which specific offence in the future, what should the police do about it now? Implant a biometric chip and keep them under perpetual surveillance to stop them doing what they probably didn’t know they were going to do? Fine or imprison them? (How much, and for how long?) What standard of proof will the AI apply to its predictions? Beyond a reasonable doubt? How will we measure the accuracy of the process? Having made the prediction, the police must do something. Without preventive intervention, they would be in the absurd position of needing the future offence to be committed to prove the predictive accuracy of their algorithm. And if the individual doesn’t subsequently offend after timely intervention, the process is vindicated. Either way, the prediction can never be tested.
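A small simulation makes the circularity plain. Everything here is hypothetical – the population size, the 5 percent base rate and the random “predictor” are invented for illustration – but it shows why intervening on every flagged individual leaves no outcomes against which the algorithm’s accuracy could ever be measured.

```python
import random

random.seed(1)

# Hypothetical numbers for illustration only.
POPULATION = 10_000
BASE_RATE = 0.05   # true chance a person would go on to offend
FLAG_RATE = 0.10   # share of people the 'predictor' flags

would_offend = [random.random() < BASE_RATE for _ in range(POPULATION)]
flagged = [random.random() < FLAG_RATE for _ in range(POPULATION)]

# Without intervention, the predictor can be scored - but only because the
# predicted offences were allowed to happen.
confirmed = sum(w and f for w, f in zip(would_offend, flagged))
print(f"No intervention: {confirmed} flagged people went on to offend")

# With intervention on everyone flagged, their outcomes are suppressed, so
# the observed record contains nothing to confirm or refute the flags.
observed = sum(w and not f for w, f in zip(would_offend, flagged))
print(f"Intervention: 0 observable outcomes for the flagged; "
      f"{observed} offences among the unflagged")
```

Whatever the flagged group then does, the record remains consistent with the algorithm having been right.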
This is not to say predictive tools have no place in policing. Policing can derive some powerful predictive applications from existing data. Vulnerability is a burgeoning and resource-intensive area – pre-empting it offers considerable benefits. Resource modelling to protect our critical infrastructure is another compelling use case, along with traffic management and roads policing. Internal examples might include tackling officer and staff welfare issues, and improving shift patterns and deployment.
Public confidence is more likely to come from using technology to improve how we deal with offenders and offending now than from speculative dabbling in the actuarial.
Malcolm Gladwell said that a prediction in a field where no prediction is possible is just prejudice. Even if it comes with mathematical precision, do we want more prejudice in our criminal justice system?
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.