Biometrics governance is a nice-to-have for some police
Following the lead of law enforcement agencies fed up with tiresome lectures about thinking before shooting, some police brass have no time for ethical guidelines when it comes to officers using facial recognition and related technologies.
An article from publisher Tech Monitor might startle even proponents of biometrics used in policing.
It quotes the United Kingdom’s police minister, Kit Malthouse, saying that helping officers understand the out-sized impact biometrics can have on the people they serve could “stifle innovation” in surveillance and identification technology.
Member of Parliament Malthouse, according to Tech Monitor, told a House of Lords committee that there is more to lose in terms of fighting crime than there is to gain by, for example, creating a governing body to consider ethical-use guidelines.
Frameworks, he explained, are “generally for more-mature technology.”
In fact, Malthouse suggested Parliament itself can create, judge and manage the ethical behavior of police agencies using AI systems to prevent crime.
A possible flaw in that argument is that agencies across the nation are buying and deploying systems now without coordination, much less best practices.
(Not everyone in UK national politics agrees; ethical guidelines are actively being debated.)
The same is true regionally in the United States.
Any talk on the national level about ethics guidelines for law enforcement is just that: talk. Clearview AI, a face biometrics vendor that courts condemnation with its "publicly visible content has no privacy protections" business model, just signed the Federal Bureau of Investigation to a one-year subscription.
A joint investigation by the Pulitzer Center and the South Florida Sun Sentinel into how facial recognition systems are being used reveals resistance toward ethics policies on the part of police officials.
Palm Beach and Broward counties in Florida perform more face scans than almost all other sheriffs' offices in the state, according to the resulting article, and the county police departments refuse to create biometrics governance policies. Palm Beach and Broward reportedly ran 9,000 scans from February 2020 through June 2021.
The system they use, known as FACES, was launched 20 years ago.
Criticism fell on Fort Lauderdale, in Broward County, in spring 2020 when police used the FACES facial recognition system to try to identify people peacefully protesting the police murder of George Floyd.
The city’s police department has said it will create ethical-use policies in the aftermath of the incident.
The Broward County Sheriff's Office's refusal helps make the case that coordinated policies are needed: county law enforcement officials endorse a freer hand than do those of Broward's biggest city.
Back in the United Kingdom, the police agency in Nottinghamshire put out a media release about a pilot test of an unspecified facial recognition system.
Granted, the document is written for public consumption, but the only policy apparent is increasing the pace at which suspects can be identified, picked up and brought before a judge.
Speed in police work is good, but historically, its pursuit can create an over-reliance on techniques and technology. There might be a practical internal policy, but if the release is accurate, Nottinghamshire appears ready to put people before judges during a pilot project, which typically is a time to shake out bugs and get acquainted with a new technology or product, not a time to stake prosecutions on it.
AI | biometric identification | biometrics | ethics | facial recognition | police | regulation | surveillance | UK | United States