NYPD’s new facial recognition policy sparks immediate calls for a ban
The New York City Police Department’s (NYPD’s) new policy for the use of biometric facial recognition technology immediately fueled a storm of protest and calls for an outright ban.
NYPD said while it has “had an existent policy in place, with rapidly evolving technology, it was important to update it, memorialize the rules surrounding its use, and remind all members of the department of those rules.”
“When used in combination with human analysis and additional investigative steps, facial recognition technology is an important tool in solving crime, increasing public safety, and bringing justice for victims,” NYPD said, emphasizing that the department “has never arrested anyone on the basis of a facial recognition match alone – it is merely a lead in the investigative process.”
NYPD explained that “at the center of these reforms are explicit guidelines detailing the appropriate scope, uses, and procedures members of the service must follow when utilizing facial recognition technology,” and that the “Patrol Guide provision is effective immediately and is part of a continued effort to increase transparency and promote trust — all while enhancing the capabilities of investigators to better serve New Yorkers who have been victimized.”
“When you look at policing and the evolution of technology and the proliferation of cameras, I think it is self-evident that as businesses and private citizens deploy the use of cameras more and more, it logically leads to the next question of how you’re going to use those images once you recover someone committing a crime,” said Police Commissioner Dermot Shea.
“It is our responsibility to ensure investigators are equipped with effective technologies to bring justice to New Yorkers who have been victimized. When a crime occurs and there is video or images of a perpetrator committing a crime, and that perpetrator is unidentifiable, trained investigators take that image and compare it against lawfully obtained arrest photos,” Shea said in a statement, adding, “A facial recognition match is merely a lead; it is not probable cause. This new policy clearly defines the permissible use of facial recognition technology and it strikes the right balance between public safety and privacy.”
NYPD said “facial recognition technology must only be used for legitimate law enforcement purposes,” and outlined the following authorized uses for the technology:
• To identify an individual when there is a basis to believe that such individual has committed, is committing, or is about to commit a crime;
• To identify an individual when there is a basis to believe that such individual is a missing person, crime victim, or witness to criminal activity;
• To identify a deceased person;
• To identify a person who is incapacitated or otherwise unable to identify themselves;
• To identify an individual who is under arrest and does not possess valid identification, is not forthcoming with valid identification, or who appears to be using someone else’s identification or a false identification; or
• To mitigate an imminent threat to health or public safety (e.g., to thwart an active terrorism scheme or plot).
Critics of NYPD’s use of facial recognition technology, though, were quick to say the technology is flawed and inaccurate, particularly when applied to people with darker skin.
The Urban Justice Center Surveillance Technology Oversight Project (STOP), a New York-based privacy group, swiftly condemned NYPD’s revised facial recognition policy, calling for a state-wide ban on facial recognition surveillance and mandatory reporting on the privacy impact of all NYPD surveillance tools.
“The policy does nothing to reform existing facial recognition practices, such as the use of software with higher error rates for New Yorkers of color, the routine alteration of images in Photoshop, and the use of facial recognition for low-level offenses,” STOP said in a statement.
“After using facial recognition for a decade without any regulations, the NYPD’s policy is too little, too late,” said STOP Executive Director Albert Cahn. “This policy places no restrictions on some of the NYPD’s most problematic uses of facial recognition, such as reliance on software that misidentifies Black and Latinx New Yorkers more often. At a moment when cities around the country are banning facial recognition, simply writing down the status quo is not enough. We need limits on NYPD surveillance that will stop discrimination against communities of color and block wrongful convictions. Lawmakers in Albany and at City Hall must rein in this high-tech stop-and-frisk.”
STOP said NYPD’s new facial recognition policy “came two weeks after it was revealed that the NYPD was the largest user of Clearview, a controversial facial recognition firm, which took photos from millions of Americans without consent to create their tracking tool. After initial denials that the department used Clearview, leaked company documents revealed that the NYPD ran more than 11,000 searches on the system.”
“This highlights that the NYPD has been operating without oversight of its surveillance technology for far too long,” Cahn responded, saying, “It shouldn’t take a decade to understand how New Yorkers’ privacy is being invaded. This is why we are once again calling on City Council Speaker Cory Johnson to allow an immediate vote on the Public Oversight of Surveillance Technology (POST) Act, which has been pending before the Council for 3 years.”
STOP is a leading advocate of the Public Oversight of Surveillance Technology (POST) Act, a New York City council bill that would require privacy protections for all NYPD surveillance programs and databases. Sponsored by a majority of the City Council and endorsed by the New York Times, the bill recently had a hearing before the Council’s Public Safety Committee.
Introduced by Councilwoman Vanessa Gibson, the bill would require NYPD to surrender data about its surveillance technology, require public comment, and compel the police commissioner to provide a surveillance impact and use policy report to the City Council.
“We will review the language of the bill when it becomes available, but to not use technology like this would be negligent,” NYPD Deputy Commissioner of Public Information Jessica McRorie said in a statement.
Following the New York Times’ investigation of NYPD’s use of the Clearview AI facial recognition tool, State Senator Brad Hoylman introduced a bill that would prohibit law enforcement from deploying any biometric surveillance technologies and would establish a task force to regulate their use.
“Facial recognition technology threatens to end every New Yorker’s ability to walk down the street anonymously,” Hoylman said in a statement. “New York must take action to regulate this increasingly pervasive and dangerously powerful technology before it’s too late.”
best practices | biometrics | facial recognition | nypd | police | privacy | regulation