Ring keeps pushing the boundaries for facial recognition in US policing

Watching the New York Police Department expand its facial recognition technology and practices is like watching a non-expert build their own fireworks.

The process might go off as hoped, with no harm done, but the opposite outcome is just as possible.

The NYPD has been using facial recognition software to analyze crime scenes and suspected crime scenes for just 11 years. That is a brief span for complex math, ad hoc networks of camera hardware and human perception to evolve into a reliable and ethical system, and there is little evidence that this has happened.

Exactly a month after New York City allowed its police to view historical footage from Ring cameras, The New York Times has examined what it means to crowdsource law enforcement in the United States’ largest and most restless metropolis.

New York police officers do not watch live feeds, only footage that is volunteered or made public by owners of Amazon’s Ring devices. They can, however, run volunteered images through the facial recognition applications that the department uses.

In doing so, the article recounts concerns from citizens and from advocates for privacy rights and police reform.

The article arguably overstates the current impact of Ring owners and posters, citing politicized figures that are nonetheless far from overwhelming. It describes the estimated 10 million devices believed to be active in the U.S. as ubiquitous in a nation of 333 million people.

That is a point worth making, but the number of U.S. law enforcement agencies that Ring executives say are using the company’s social media app, Neighbors, is large and growing. Ring has publicized an interactive map of U.S. government agencies on Neighbors, in which red fire department emblems sit atop the far more numerous police emblems.

And there remain a number of credible studies and analyses indicating that human fallibilities and prejudices already compound a stack of technical challenges and shortcomings inherent even in non-crowdsourced public facial recognition surveillance.

The latest report, at least in the United States, comes from Georgetown University’s Center on Privacy & Technology. It found that facial recognition “may be particularly prone to errors” created by subjective judgment, bias, manipulated or poor-quality evidence and technology.

The report’s authors based their analysis on “the vast wealth of research and knowledge already present in computer science, psychology, forensic science and legal disciplines.”

The Times article cites a 2018 analysis by MIT Technology Review that could not confirm Ring executives’ 2015 claim that, in an experiment of their own design, crime was dramatically reduced through the use of the company’s hardware.

Despite informed doubts, the Georgetown report finds, at least three police departments have used facial recognition algorithms as probable cause to arrest suspects. And evidence collected from biometric searches is being presented in criminal court without giving the accused an opportunity to challenge it.
