US continues lively debate over facial recognition and biometrics in law enforcement
Lawmakers in Virginia are debating the continued use of facial recognition by police, and a paper presented by the Brookings Institution argues that surveillance and data collection disproportionately harm people of color, returning the limelight to the ever-evolving national discussion over the use of facial recognition and biometrics in law enforcement. Though criticism of biometrics in policing persists, a survey from Veritone suggests that when police use technology like body cameras and facial recognition, it may actually build trust with communities.
Virginia lawmakers debate use of facial recognition by police
Legislators on both sides of the partisan aisle in Virginia are continuing to debate a bill that would preserve the use of facial recognition by police.
WVTF, a Virginia affiliate of NPR, reports lawmakers remain divided over a bill introduced by Senator Scott Surovell that would allow police to use the biometric technology.
“This is not Minority Report. The bill does not authorize surveillance. It does not authorize sort of constant monitoring,” Surovell says. The Democratic state senator says the technology will be used in “discrete circumstances” and will not involve constantly monitoring faces in real time to single out people to charge.
The bill has drawn both opposition and support from bipartisan groups in both chambers of the state legislature. Opponents include Republican Speaker Todd Gilbert and Black Caucus Chairman Lamont Bagby, a Democrat. Supporters include former Speaker Eileen Filler-Corn, a Democrat, and House Majority Leader Terry Kilgore, a Republican.
An amendment from Governor Glenn Youngkin will keep the issue on the docket for weeks to come. Youngkin added an amendment mandating additional training for police use of facial recognition, which will be debated later in April, according to WVTF.
Data privacy vital to protect people of color: Brookings paper
A paper written by members of the Brookings Institution recommends heightened data privacy protections in the U.S. as a remedy against law enforcement using facial recognition and biometrics to unfairly target and discriminate against people of color.
The paper, titled ‘Police surveillance and facial recognition: Why data privacy is imperative for communities of color,’ is written by Nicol Turner Lee, a senior fellow in Governance Studies, and Caitlin Chin, a fellow at the Center for Strategic and International Studies. It first recounts a lineage of surveillance by U.S. law enforcement of communities such as Black civil rights activists, Asian Americans, Muslims, and Latinos. From there, the authors trace the growing adoption of technologies like facial recognition and machine learning algorithms that have “drastically enlarged the precision and scope of potential surveillance.” Given the vacuum of privacy protections at the state and federal levels, they argue, these technologies pose risks to civilians in the criminal justice system and compound biases that affect communities of color.
On the proliferation of facial recognition, the paper cites a report from the Government Accountability Office (GAO) that discovered in 2021 that about half of federal U.S. law enforcement agencies used the biometric. Research from Georgetown Law estimated approximately a quarter of state and local law enforcement agencies had access.
The private sector is also viewed as a culprit in the phenomenon. The report names Clearview AI as a growing force in law enforcement, with billions of photos in its database compared to the 640 million held by the FBI. Others named in the paper include Vigilant Technologies and ODIN Intelligence, which support law enforcement by selling license plate data collection or facial recognition technologies to various agencies. Additionally, big tech corporations like Apple, Google, Microsoft, and Facebook are seen as widely complying with legal requests from law enforcement agencies. The paper also points to law enforcement use of data analytics and geolocation services from Venntel, X-Mode, and Babel Street; trawling through social media for photos and posts; the expansion of surveillance in public areas; and the growth of private surveillance.
Among surveillance technologies, facial recognition is called “the most daunting of them all” in the paper. It cites research from MIT that found higher rates of misclassification for darker-skinned individuals compared to lighter-skinned ones, and it generalizes from a NIST study that found algorithms from 189 commercial facial recognition programs developed in the U.S. were “significantly more likely to return false positives or negatives for Black, Asian, and Native American individuals compared to white individuals.” At the time, however, the NIST researchers cautioned specifically against over-generalizing from the results, and the same study found that for the most accurate algorithms “false positive differentials are undetectable.”
The paper then writes, “When disparate accuracy rates in facial recognition technology intersect with the effects of bias in certain policing practices, Black and other people of color are at greater risk of misidentification for a crime that they have no affiliation with.”
To address these problems, the authors recommend laws that establish guardrails for executive agencies that perform surveillance. The paper lists bills like the ‘Facial Recognition and Biometric Technology Moratorium Act’ that could rectify the gaps, but notes that additional solutions would still be required. At the state level, the paper suggests looking at frameworks for “acceptable uses of facial recognition,” like the one from Georgetown University, as well as additional training.
But the key antidote, the authors say, would be a comprehensive federal privacy law that regulates the data practices of private companies. Such a law would grant citizens the right to access and delete their personal information from the databases of businesses, and would limit the collection, storage, and retention of data by private companies. Another suggestion is that Congress “direct the Federal Trade Commission to study the impact of biometric information, including algorithmic outcomes, on civil rights in highly sensitive scenarios such as law enforcement.” A final proposal is that businesses have their algorithms audited if they analyze citizens’ personal data or use facial recognition or AI.
The paper finishes by stating, “That is why privacy protections are more important than ever for all Americans—and they are especially so for the communities of color that may suffer the greatest consequences from their absence.”
Veritone says technology can grow transparency and trust
Though the Brookings Institution paper notes the difficulty of reconciling technology with fair law enforcement, a survey from voice biometrics company Veritone suggests that technology like biometrics may help build transparency and trust in law enforcement agencies.
The survey finds that 61 percent of respondents trust the police to use technology to better identify suspects, a result Veritone calls “noteworthy” given the growing use of biometrics and facial recognition by police. It also finds that 48 percent of respondents say body-worn cameras have the potential to make communities safer, the highest share for any technology on the list.
But 42 percent of respondents say a perceived lack of transparency from police has hurt their opinion of law enforcement over the last five years. With transparency in policing remaining a concern, police sources quoted in the survey suggest that technology can help. “Technology plays an incredibly important role in making people feel safe, solving crimes, and building trust and legitimacy within the community. The more information we can provide at the public’s fingertips, I think the better off we’ll be,” says Christopher Bailey, assistant chief of the Indianapolis Metropolitan Police Department.
To combat a common source of mistrust, Veritone says AI can play a role. The slow release of body camera footage, owing to the need to redact innocent bystanders and other identifiable data, is said to harm public perceptions of the police, who could be seen as concealing information. But AI can expedite this process, as Chief Jorge Cisneros of the Anaheim, California, Police Department says: “It takes time to redact personal and identifying information as required by law from a video or image of a police incident. But we have tools and capabilities that allow us to do it faster. As a chief, I have a better ability to release that information more quickly and push it out when I believe it will help answer questions within the community, or help immediately improve public safety.”