Police use of AI ‘outrageous and unforgivable privacy invasion’ – say the police

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
Condemnation of police forces deploying ‘opaque and untested’ surveillance tools is nothing new. A cursory online rummage will reveal almost daily coverage of public concerns about AI-enabled technology like facial recognition being piloted by police forces. But last week’s challenge to the latest covert deployment of new technology came from within policing itself.
Damning comments from the Police Federation of England and Wales followed the revelation that the Metropolitan Police Service has been using covert AI-powered technology to monitor its officers’ movements, communications and data access. Almost 600 cases have reportedly been highlighted – 42 of them involving senior ranks. After what the General Secretary of the staff association called ‘an outrageous and unforgivable invasion of privacy’, 100 officers are now under investigation for gross misconduct, with another 30 ‘flagged for suspicious behaviour’.
Police use of AI-enabled technology has two aspects. The first is law enforcement and other operational functions – the bit that gets the headlines. The second is tackling the more mundane administrative tasks shared by all big organisations with workforce, estate, logistics and finance issues. But, given their investigative powers and duties, where are the boundaries for the use of covert internal monitoring to catch rule-breakers? Is this ‘policing’? Intelligence gathering is a key police function, and the company supplying the software – Palantir – is named after the magical stones used in Tolkien’s Lord of the Rings trilogy to gather intelligence, so perhaps there’s a clue. Either way, the covert internal deployment of AI-enabled technology in the police workplace matters for a few reasons.
First, the wider security case for using new technology in this way is compelling. The public expect high standards of conduct from the police and expect abuses to be rooted out. The same is true of other vital public services, but the argument extends to many privately operated entities delivering critical functions. Conflicts in Ukraine and the Gulf are illustrating the fragility of our critical infrastructure. Mitigating threats to the water, transportation, food, energy, finance and communications sectors might equally justify the use of internal biometric surveillance.
Second, AI is in a perpetual beta state. Once you’ve bought the kit, you will discover that it can do other things equally well – and if it was procured with public funds, you’re duty-bound to maximise its value for money. With AI, function creep comes as standard. Against a backdrop of relentless spending pressures and resource challenges, it is no longer hypothetical to wonder when police forces will be using their facial recognition technology to flag employees suspected of pulling a sickie or interpreting working from home too liberally.
Third, there are convincing efficiency arguments for all organisations to use AI-powered tools in the workplace to monitor compliance with policy. Once they start to turbocharge their processing of employee records with AI, employers’ data will be invaluable in the investigation of crime or gathering of intelligence – will they share it when the police come and ask for it?
And fourth, if it’s acceptable for staff efficiency, the case for building the same technology into safety-critical functions – biometric tachographs, for example – becomes irresistible.
Transferring the bots to HR was inevitable – they might be good at recognising wanted people on the street, but their game-changing strength is in combing and combining diffuse datasets. When you need iterative crunching of vast amounts of live data, bots are your MVPs in both senses of the term. Weigh the cost of humans checking compliance across multiple layers of organisational policy against the automation of highly transactional processes, and handing it all to AI becomes a no-brainer – which is why the same US company is helping the Ukrainian army process real-time intelligence data to improve the efficiency of its strikes against Russian forces.
It’s interesting to see the police on the other side of this technology argument – hopefully no one will reach for the weary ‘if you’ve done nothing wrong you needn’t worry’ platitude, but I wouldn’t bet on it. Thus far, the revelations extend only to the Metropolitan Police, and it remains to be seen how many other UK forces will follow.
Employers will rightly want to explore the benefits that AI can bring, but all workers – police and non-police alike – need safeguards and assurances. Where and when, by whom and for what purposes can their biometric and related data be accessed? Boilerplate phrases limiting it to auditing ‘compliance with relevant policies’ will probably not be enough to assuage the fears of staff and their trade unions. The General Secretary of the Metropolitan Police Federation is asking: ‘where is the transparency… and the reassurance that the correct checks and balances are there?’
I have highlighted that this is what under-regulation looks like, and the police are now seeing it from the other end of the AI telescope. Once public watchdogs start turning their biometric technology inwards, people’s views on the need for clearer regulation may change. Some are already asking: ‘if the police do that to each other, what chance have the rest of us got?’ While we wait to see, picture this: if you were to get a request from your employer to comment on a detailed analysis of your every meeting, journey and call, your hours worked, breaks taken and buildings visited over the last year, would you be in any position to respond? There’s a certain inequality of arms in gainsaying content produced by military-grade software – and if you’re unsure why that should be a concern, ask a UK sub-postmaster.
What might this all look like in the future? That’s unclear, but the next global pandemic will show us just how far and how fast the aggregated surveillance capability of employers and governments has evolved.
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.