A New Year’s resolution for AI – don’t blame the bot

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
According to the old saying, a poor worker blames their tools. And although AI is nothing like any of our previous tools, evidence from the criminal justice system suggests that we're going to be blaming them for a while yet.
Condemnation is a lazy response to complexity, and it's almost reflexive when we don't understand the thing that's causing us problems. Witch hunting is both a term and a tactic still wielded after public failure. Evolving from an age-old tradition of persecuting someone (usually poor, often older women) for everything from failed crops to physical ailments, the search for surrogate sinners is part of human heritage – even our naming of severe weather events feels like it has its roots here somewhere.
Now that we have decided our new tools can get things wrong all by themselves, we are entering a new era of tech-blaming. For the first time the tools can be guilty without any involvement of the user – and we've even come up with a handy verdict to absolve us both when they err. 'Hallucinations' are increasingly identified as the cause of artificial intelligence (AI) malfunction, and I suspect we're going to hear the H-word a lot. When that happens we should pause and recall a much older term for shifting our collective sins onto some hapless creation. Self-directing technology is a godsend for tool-blamers, and we would do well to keep watch for the herding of AI scapegoats.
The police, courts, probation service and prisons need to use AI as extensively as any other sector, and they can learn from the commercial sector, which is further down the road. Business leaders have found that the 'hard part' of introducing new technology is getting customers to trust it; in a criminal justice setting this will be a harder sell. When you're trying to build trust and confidence in police technology, appearances matter, and we know how early experimentation with live facial recognition capability set policing back years. That ground is being recovered, and people in the UK generally support the lawful use of AI by the police, but public attitude surveys show some widely shared concerns. One worry is that the police will try to blame the technology when things go awry.

Against this background, the West Midlands Police decision to ban away fans from a Maccabi Tel Aviv football fixture last year is interesting. In arriving at their controversial decision, the police reportedly relied on "internet scraping" of open-source intelligence (OSINT), which threw up details of a previous football match between the Tel Aviv team and another English club (West Ham), quoting the date and even the final score. But no such fixture ever took place. That this 'never-happened' event found its way into the evidence presented to parliament is professionally embarrassing for the police, but the particular risk here isn't whether they passed the buck to the bot; it's that it might look as though they tried. This is precisely what surveyed respondents fear.
When we use AI in criminal justice, we will also need to be very careful, not only about obvious things like reliability and bias, but also about some basic concepts that affect trust and confidence. For example, when we build or buy a system, we must own its design flaws and shortcomings despite any functional 'autonomy' it may have. To cite a pre-AI case, if you build a police database without bulk deletion capability, you cannot later rely on that omission as a defence to illegally retaining millions of images on it – doing so sounds disingenuous. Accountability for police AI begins upstream, at the pre-procurement stage, and the permanent beta state of AI tools will bring a new challenge. Proliferation and self-augmentation are part of AI's potential; what we came to recognise – and be rightly suspicious of – as 'function creep' in legacy technology projects will be legitimately intrinsic. Adjusting to that – in law enforcement and elsewhere – will take a gyroscopic balancing of the possible, the permissible and the acceptable.
In the justice system, even entry-level reliance on AI for 'administrative support' will call for vigilance. Using AI to support the drafting of public records like court judgments will become commonplace, and an ongoing Scottish employment law case testifies to how quickly a large language model (LLM) tool can find itself in the dock for any inaccuracies.
The public availability of AI tools is also changing established ways of doing things. The world's largest accounting body has just scrapped online exams because it's simply too hard to detect AI cheating, which has replaced the excuse of "the dog ate my homework" with "the bot did my homework". In a criminal justice setting this presents significant challenges to the presentation and verification of evidence, which already involves a measure of finger pointing.
Whatever the technological advances, some of the human imperatives and imperfections at work don't appear to have changed: fear, shame or just plain old vanity will still drive the projection and deflection of blame in public projects – and when they do, the AI tool offers an almost irresistible stooge.
The criminal justice system is no stranger to apportioning blame where it doesn't belong – one of the great contributions of biometrics such as DNA profiling has been its significant reduction of the opportunities for wrong outcomes. But as we hand off some of the decision-making for key public services like law enforcement to AI, the old-fashioned pillory may make a comeback, and the robot will find itself in the stocks.
So, as we reflect on digital trust and ponder the outlook for an already turbulent 2026, I’d propose that law enforcement make and keep one resolution: whatever else you do with biometric technology this year, don’t blame the bot.
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.