
This is what AI needs to become the primary tool for police investigations

By Jan Lunter, CEO at Innovatrics

Back in 2023, the UK’s Crime and Policing Minister, Chris Philp, emphasized: “AI technology is a powerful tool for good, with huge opportunities to advance policing and cut crime. We are committed to making sure police have the systems they need to solve and prevent crimes, bring offenders to justice, and protect the public.”

AI enables law enforcement to reach these goals in several ways. First, AI-driven facial recognition systems can match faces captured in surveillance footage against existing databases of suspects or missing people. Second, these tools can identify suspicious behavior, detect anomalies, and track individuals’ movements through surveillance cameras. Third, they can monitor social media platforms for mentions of criminal activity, threats, or other relevant information.
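To make the first of those use cases concrete, below is a minimal sketch of how 1:N face matching typically works: each face is reduced to a numeric embedding, and a probe image is compared against a gallery of known embeddings. The 512-dimension embeddings, the similarity threshold, and the random vectors standing in for real model output are all illustrative assumptions, not any particular vendor’s implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, gallery: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Return the gallery identity most similar to the probe,
    or None if no score clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Random vectors stand in for embeddings a real face model would produce.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=512) for i in range(100)}
probe = gallery["person_7"] + rng.normal(scale=0.1, size=512)  # noisy re-capture
print(match_face(probe, gallery))  # -> person_7
```

The threshold is the operational lever: raising it trades missed matches for fewer false identifications, a trade-off that matters in the cases discussed below.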

The heart of the matter, however, is that AI still has drawbacks in law enforcement. Let’s take a look at what the downsides are and how to address them.

Why AI remains a work in progress

Have you heard of Robert Julian-Borchak Williams? He was arrested in front of his wife and two young daughters as he returned home from work, after facial recognition technology misidentified him as a suspect.

This case emphasizes how AI tools are prone to errors, particularly due to biases related to skin type and gender. In fact, MIT’s Gender Shades study of facial-analysis software revealed an error rate of 0.8% for light-skinned men but a significantly higher rate of 34.7% for dark-skinned women.
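That disparity is exactly what a per-group error audit is designed to surface. The sketch below shows the basic mechanics with hypothetical labels, predictions, and group tags, not the study’s actual data:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])                  # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1])                  # model predictions
groups = np.array(["A", "A", "B", "B", "A", "B", "B", "A"])  # demographic group

# Overall accuracy can hide large gaps, so compute the error rate per group.
for g in np.unique(groups):
    mask = groups == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error rate {error_rate:.1%} over {mask.sum()} samples")
```

Run over a realistically large test set, the same few lines reveal whether a system fails some demographics far more often than others.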

On top of that, many AI algorithms are complex and opaque, which makes it difficult for law enforcement to understand how these solutions arrive at their decisions. Promoting transparency helps police forces ensure accuracy and fairness.

Last but not least, AI systems used in law enforcement may be vulnerable to hacking: they are often interconnected with other networks or devices, which increases the attack surface and provides entry points for attackers to infiltrate the system. Strengthening defenses can prevent cybercriminals from gaining access to sensitive data, thereby safeguarding ongoing investigations, confidential information, and undercover operations.

Mechanisms to test AI accuracy are critical

Unlike biometrics, where identification accuracy can be easily verified, general AI applications often lack straightforward methods for assessing accuracy. Therefore, establishing mechanisms to test and evaluate the accuracy of AI systems is crucial for enhancing their efficiency in police work.

One approach is to have AI systems tested independently by third-party organizations or experts. These independent testers can evaluate the performance and efficiency of AI algorithms and provide objective insights into their accuracy. Biometric algorithms themselves already undergo such scrutiny via various benchmarks performed by the National Institute of Standards and Technology (NIST). These benchmarks were established with the very aim of providing law enforcement with in-depth information about the accuracy and error ratios of all relevant biometric algorithms.
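As a toy illustration of the two error ratios such benchmarks report, the false match rate (FMR) and the false non-match rate (FNMR), here is how they fall out of a set of comparison scores. The score distributions below are synthetic stand-ins for real benchmark data:

```python
import numpy as np

rng = np.random.default_rng(0)
genuine_scores = rng.normal(loc=0.8, scale=0.1, size=10_000)   # same-person comparisons
impostor_scores = rng.normal(loc=0.3, scale=0.1, size=10_000)  # different-person comparisons

# At a given decision threshold, the two error ratios pull against each other.
threshold = 0.55
fmr = np.mean(impostor_scores >= threshold)   # impostors wrongly accepted
fnmr = np.mean(genuine_scores < threshold)    # genuine subjects wrongly rejected
print(f"FMR: {fmr:.4%}  FNMR: {fnmr:.4%}")
```

Published benchmarks sweep the threshold across its whole range, which is what lets an agency pick an operating point suited to the stakes of the investigation.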

The same approach should be adopted for other AI applications as well, giving anyone an objective measure of an AI system’s capabilities. If you know about an error or bias, you can design a process to address it. Inaccuracy in itself is not a problem – not knowing about it is what’s dangerous.

Another method is to use cross-validation. This strategy involves splitting the dataset into multiple subsets, training the AI model on some of them, and testing it on the held-out remainder, rotating the roles to ensure robustness. In addition, performing thorough error analysis can help identify and understand the types of errors AI solutions make. By analyzing false positives, false negatives, and other errors, the police force can pinpoint areas for improvement and boost the accuracy of AI algorithms.
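As a rough sketch of both ideas, assuming scikit-learn and a synthetic dataset that stands in for real case data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: train on four folds, test on the fifth, rotate.
scores = cross_val_score(model, X, y, cv=5)
print(f"accuracy per fold: {scores.round(3)}, mean {scores.mean():.3f}")

# Error analysis on a held-out split: break failures into their types.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
```

A wide spread between fold scores is itself a warning sign, and the false positive count is the number that matters most when a match can lead to an arrest.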

By implementing these measures, AI doesn’t function as an unsupervised “black box,” simply delivering answers without explanation or accountability. Instead, AI systems can offer transparency regarding their information sources and reasoning processes, enabling crime-fighting units to understand and track their decision-making. And through rigorous testing procedures, it is possible to identify potential security risks while ensuring that AI systems meet established security standards and protocols.

The bottom line is that AI is increasingly proving to be a valuable asset for law enforcement, aiding in identifying suspects and missing persons through facial recognition, analyzing large volumes of digital evidence, enhancing surveillance capabilities, and preventing crime.

However, decision-makers must recognize that AI is still in its evolutionary stages and prone to errors stemming from biases, lack of transparency, and vulnerability to cyberattacks. To tackle these issues and enhance the use of AI in police investigations, creating mechanisms to test AI becomes non-negotiable. This way, public safety officers can promote accountability and transparency.

About the author

Jan Lunter is Co-founder and CEO of Innovatrics, which has been developing and providing fingerprint recognition solutions since 2004. Jan is the author of an algorithm for fingerprint analysis and recognition that regularly ranks among the top performers in prestigious comparison tests (NIST PFT II, NIST Minex). In recent years he has also been working on image processing and the use of neural networks for face recognition.

DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author, and don’t necessarily reflect the views of Biometric Update.
