Policing deepfakes: does the camera ever lie?

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
There’s a well-known saying across the English-speaking world that ‘the camera never lies’. It isn’t meant literally and, even in its early days, photography wasn’t a direct reflection of reality. Many features of a photograph, such as the choice of subject, aspect, timing and lighting, have always been in the subjective hands of the photographer. Processing and printing allow intervention between the moment of capture and the record of it; some artists even physically alter the image during the chemical development process.
The expression has less to do with trust and more to do with the fact that old-fangled cameras were mechanical devices: the image’s existence was undeniable proof of the event. The Box Brownie was incapable of lying; ergo, whatever it was pointed at, and whatever its photosensitive film was exposed to, must (however artistically presented) represent a basic truth. With digital imagery all that changed.
The surveillance device of the first era was a passive tool of capture in the hands of an active operator. The ability to manipulate pictures, with options to insert, remove and edit the entirety of an image, moved us beyond ‘cameras’. In the second era – the one we are just leaving – the box with chemical-coated film became a computer capable of many functions, a basic one being to record images. Able to follow instructions and carry out complex functions, using edge computing and cloud storage for faster processing of metadata, the digital surveillance device replaced Old World primates like CCTV and opened the gates for AI-enabled remote biometrics.
If they are to be used as evidence, digital images and records need the same established level of integrity and verification as any other form of evidence, and the internet, social media and cyber-enabled criminality have brought significant new challenges to criminal investigations. Early difficulties for the criminal justice system arose over the reliability and fairness of using material from social media accounts, and the weight to be given to self-serving posts and unauthenticated video feeds. But nothing so far has brought the change and challenge of deepfakes.
Imagine you are a juror sitting on a prosecution for a murder alleged to have happened during a birthday celebration. Images from multiple devices clearly show the defendant at a party in a house around the time of the killing, supported by witness statements from others who also appear in the pictures. The images are corroborated by social media posts, along with a receipt for a local fuel purchase made by the defendant and further time-and-date-stamped shots of him with his vehicle on a forecourt security camera, which are consistent with records obtained from the police Automatic Number Plate Recognition (ANPR) system.
Nothing in this scenario would be exceptional – unless, that is, the digital evidence has been submitted, not by the prosecution, but by the defendant as his deepfaked alibi to show he was at a party several hundred miles away from the scene at the time. How would the prosecution show that all the images were deepfakes and the records digital frauds, particularly where the police ANPR reads support them?
The prosecution may also have images, social media posts and digital data contradicting what the suspect and his witnesses are saying but how are jurors to know which ones are the most reliable? Using inherent vulnerabilities in legacy systems like ANPR (in this case, simply by getting someone else to drive around in a similar vehicle bearing the same number plate) to corroborate deepfakes – and vice versa – will make it challenging to undermine them. If the police are relying on images as evidence, it is vital to be able to prove their provenance and leave no reasonable doubt, whereas someone providing an alibi only has to show theirs is likely to be true. And that’s what deepfakes are very good at.
Image synthesis has potent appeal. As entertainment, the ingenuity of deepfakery has been contagious, and overtly faked photos of celebrities (‘Pope in a puffer’, Donald Trump wrestling with New York’s finest) have become legendary. Deepfakes are irresistible to fraudsters, and the UK government has said their sheer scale, combined with their greater sophistication and convincingness, makes deepfakes “arguably the greatest challenge of the online age”.
The juror scenario is speculative but, as courts begin to see crude examples of deepfaked evidence, policing should prepare for the next era of digital surveillance in which fraudulent alibis will feature somewhere.
Deepfakes are becoming harder to detect and easier to access; even seasoned journalists are getting hoodwinked. With policing relying increasingly on images shared by citizens, deepfakes bring a broader risk: some faked calls and videos will be shared with the police as evidence of crime or of a need for response.
Deepfakes can be ordered like any other online purchase, and the more that defendants try to evade justice by using deepfakery, the more resource the police will need to counter it. How? The answer seems to be more technology.
Accepting the output from a smart device as unadulterated truth is technologically naïve; with AI “inundating the internet, crippling our political structures, and undermining reality”, any traditional veracity of the camera has long been extinct. Public awareness is growing, and the response has produced fact-checking sites and open competitions trying to keep up. We’re going to need help from AI itself and should enter this phase with caution – there’s something disconcerting about labs creating new poisons and then selling us the antidote.
However endlessly adaptable its progeny, the device of the second surveillance era was a tool in the hands of an active operator. We’re still the ones doing the lying. In the next era the device will morph from interactive instrument to autonomous creator. Without need of a human operator, the agentic AI surveillance entity will be capable of deception of its own volition. The next-gen surveillance ‘camera’ will be able to lie to us, and we haven’t begun to grapple with that reality.
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.
Article Topics
biometrics | deepfakes | Fraser Sampson | generative AI | synthetic data | video surveillance