UK police not transparent about AI, biometric facial recognition use in investigations, report says
Although UK police likely use artificial intelligence (AI) and automated decision systems (ADS) in criminal investigations, only a small number of forces are willing to talk openly about it, and just as few have informed the public that the technology is being used on them, says a paper released by the RSA.
Through its Tech and Society program, the RSA used freedom of information requests to find out how law enforcement in the UK uses AI and ADS in criminal investigations, whether the public was informed about the technology, and whether staff were trained to use it or given any guidelines.
According to the report, AI and ADS are mostly used for predictive policing and biometric facial recognition. Citing the lack of public engagement, transparency and effective communication, the RSA warns against expanding police powers after COVID-19 under the justification of containing the public health crisis. “All territorial police forces have access to retrospective facial recognition through the Police National Database (PND), which contains millions of images of police suspects,” reads the paper.
In the UK, South Wales Police has been criticized for using biometric facial recognition at soccer games, and has implemented AI to predict crime and to choose prisoners for rehabilitation. It is the only force that confirmed consulting with citizens on how AI is used. The Met Police is considering launching a public engagement program to discuss its tech deployment.
Most UK adults would like the government to regulate and restrict police use of facial recognition, yet nearly 50 percent would accept its daily use by police if appropriate safeguards were in place, according to a report from the Ada Lovelace Institute published last year.
Although the technology’s use is a matter of public record, the RSA is “concerned by the relative unwillingness of forces to detail their use of retrospective facial recognition through the freedom of information process.”
According to the Home Office, retrospective facial recognition has been used by UK police since September 2019.
Predictive policing is used to map incident locations and dates to create crime hotspots and to analyze crime patterns. The Royal United Services Institute pointed out in 2019 that predictive models could “skew the decision-making process and create systematic unfairness.”
Failing to engage with the public about AI deployment perpetuates a lack of awareness of how law enforcement uses, collects and stores data. The RSA did not receive answers from all of the police forces it contacted; some were still working on their responses five months later. The RSA emphasizes that AI should be used to optimize police work, not as a cost-reduction strategy.
“Confirming or denying the specific circumstances in which the Police Service may or may not deploy the use of facial recognition would lead to an increase of harm to covert investigations and compromise law enforcement,” reads a January 28, 2020 response from West Mercia Police to the RSA. “…It is well established that police forces use covert tactics and surveillance to gain intelligence in order to counteract criminal behavior. It has been previously documented in the media that many terrorist incidents have been thwarted due to intelligence gained by these means.”