Unlawful use of facial recognition by police erodes public trust in biometrics

The New York Police Department arrested a protester in New York City with the help of Clearview AI’s facial recognition technology (FRT), skirting a policy that puts strict limits on its use.
Instead of running the search itself, according to a report from The City, the NYPD outsourced a Clearview search through a fire marshal at the city’s fire department, then used the results to identify a pro-Palestinian protester at Columbia University accused of throwing a rock at a pro-Israeli protester during an April protest. Zuhdi Ahmed, a 21-year-old pre-med student at the City University of New York (CUNY), was arrested and charged with misdemeanor second-degree aggravated harassment.
The City notes that the NYPD can use facial recognition tech “only for searches of arrest and parole photos.” Ahmed was identified through photos of him at his high school formal, a school play and his high school graduation.
Furthermore, New York City’s POST Act requires the NYPD to report publicly on its use of and policies regarding surveillance technologies. The NYPD has regularly been found noncompliant with the law, according to the city’s Department of Investigation.
Two months after his arrest, a criminal court dismissed the case against Ahmed. The ruling by judge Valentina Morales includes details of an email exchange between the NYPD case detective and the fire marshal who provided access to Clearview, with the subject line “Hate Crime Assault Investigation,” and the note, “Thanks for your help, I’m watching the graduation now.” Eleven days later, the marshal responded: “Saw the news. Good work. Glad you grabbed him.”
Morales’s ruling notes that “the use of facial recognition technology that compares probe images against images outside the photo repository is prohibited” by the NYPD, and determined that “the emails provide abundant indication that the FDNY’s own police powers were invoked and utilized in the investigation of the crime charged here.”
“It is evident that the investigatory steps described in the emails clearly contravene official NYPD policy concerning the use of facial recognition.”
NY police must release all documents related to FRT database
A disproportionate share of responsibility for public trust in biometric technologies rests with law enforcement. Concerns over misuse of tools like facial recognition almost always center on the potential for overreach by police or other authorities, and regulations aim to limit exactly that. So when authorities break the rules and do with the technology what everyone fears they might, a pillar of public trust in biometrics gains yet another crack.
The New York Police Department has deployed facial recognition technology (FRT) since 2011. According to Amnesty International, between 2017 and 2019, the NYPD used the tech in 22,000 cases. Details on those cases have been kept confidential, but in February 2025 an appeals court ruled that the NYPD must disclose all documents related to maintenance of its FRT database.
Transparency, however, does not come naturally to many police forces. Efforts to withhold information create conditions for distrust – distrust that is then vindicated by incidents in which authorities are found to be skirting the rules.
Cleveland murder case hinges on legally questionable use of facial recognition
Another instance of sloppy handling of FRT by police comes from Cleveland, where next month, the 8th Ohio District Court of Appeals will hear arguments in a murder case that Cleveland.com says could set a precedent for how law enforcement uses facial recognition when investigating crimes.
In 2024, Cleveland resident Blake Story was robbed at gunpoint and shot to death while out walking. Police used facial recognition to identify the suspect, Qeyeon Tolbert, then raided his apartment and found what they believe is the handgun used to kill Story.
Tolbert’s lawyers argue that the facial recognition match – achieved using Clearview AI – is inadmissible in court, noting that “the picture used to match the suspect to Tolbert was taken of the defendant shopping in a convenience store.”
A judge agreed with them and excluded the FRT results from the trial.
Oral arguments in the appeal will begin on August 22, with prosecutors saying the case is a “lost cause” unless that ruling is overturned.
Law enforcement clearly wants to use FRT to pursue cases it considers justified – in some cases, murder, in others, throwing a rock at a political protest.
This enthusiasm seems to win out over abiding by the rules (inasmuch as rules exist; Cleveland police do not have a policy governing the use of facial recognition). But with every instance in which police do not have an airtight legal justification for using FRT, the scale tips a little more toward a society that thinks facial recognition is a tool for institutional control – and keeps getting proven right.
Moreover, it raises the question of how sustainable Clearview in particular is as a pillar of law enforcement. According to WBUR radio, 3,000 police departments in the U.S. use its tech – about one in six of all departments in the country. Yet the company continues to face legal challenges to its operations, and has ceded 23 percent of its ownership through a U.S. lawsuit. In April, it ousted its high-profile founder and CEO, Hoan Ton-That. Even if Clearview is useful and legal for police, it is hardly a model of stability, and whether it can survive the ongoing litigation storm remains an open question.
Article Topics
biometric identification | biometrics | Clearview AI | criminal ID | facial recognition | New York | New York City | NYPD | Ohio | police | United States