Project NOLA’s facial recognition push raises legal and civil rights questions

This week, the New Orleans City Council’s Criminal Justice Committee is expected to convene a hearing to address concerns about Project NOLA, a New Orleans-based nonprofit that manages a nationwide crime camera network now at the center of a national debate about facial recognition, real-time surveillance, and the expanding role of private organizations in public policing.
The goal of the hearing is to determine whether Project NOLA’s facial recognition integration should be banned, regulated, or brought under municipal authority. The hearing comes on the heels of the New Orleans Police Department’s (NOPD) decision last month to pause its use of Project NOLA’s real-time facial recognition alerts pending a legal review.
Some city council members reportedly have floated the idea of amending the city’s biometric surveillance ordinance to cover private-public hybrid systems, while others have argued that the city should sever all law enforcement ties with Project NOLA until independent audits can verify legal compliance and algorithmic integrity.
Project NOLA took the controversial leap into biometric surveillance by embedding facial recognition technology into the network of privately owned cameras and linking the feeds to its National Real-Time Crime Center. But what began as a grassroots effort to help neighborhoods monitor criminal activity has become a nationwide experiment in AI-powered law enforcement with minimal transparency and growing legal scrutiny.
The technology driving this transformation was developed in-house by Project NOLA’s founder, Bryan Lagarde, a former New Orleans police officer turned economic crimes investigator who previously developed the Orleans Parish District Attorney’s office’s first investigations tracking database. Since launching Project NOLA in 2009, Lagarde’s mission has been to reduce crime through public-private partnerships.
Project NOLA provides residents and business owners with subsidized HD surveillance cameras, wires them into a central crime monitoring hub hosted at the University of New Orleans, and feeds footage to law enforcement in real time. Over the years, thousands of cameras have been installed across New Orleans and other cities, monitored continuously by a volunteer network of public safety analysts and sometimes by police officers themselves.
In 2022, Project NOLA began testing facial recognition capabilities without formal public announcement or citywide debate. The system quietly became active in September of that year, initially using a watchlist-based model in which cameras scan for individuals flagged for arrest or investigation.
Matches trigger automated alerts to Project NOLA’s mobile application, notifying subscribed law enforcement agencies. By 2023, Project NOLA had helped facilitate at least 34 arrests through these biometric alerts, ranging from felony warrants to minor offenses.
According to internal documentation and interviews cited by The Washington Post, the facial recognition network now spans more than 200 cameras and relies on advanced machine vision algorithms trained to distinguish faces even in suboptimal lighting or angled perspectives.
Unlike more centralized, government-operated systems, though, Project NOLA’s network is deeply decentralized. Each camera is technically owned by a private citizen or business, but collectively they form an extensive surveillance web monitored in real time. Project NOLA asserts that its facial recognition data is retained for no more than 30 days and is not sold or shared with private corporations.
Lagarde has framed the program as a practical tool to combat rising crime while respecting civil liberties. He’s emphasized that the New Orleans Police Department is not permitted to directly access the facial recognition interface, request its use, or control the cameras. “Our technology is designed to notify police only when a positive match is found,” Lagarde told local media. “It is a passive system with checks against abuse.”
Still, critics argue that these safeguards are cosmetic at best and point to the program’s lack of public oversight and compliance issues with New Orleans’ facial recognition ordinance. The city’s surveillance policy, adopted after years of community pressure and civil rights litigation, permits law enforcement use of facial recognition only in connection with violent crimes and requires all uses to be logged and reviewed.
Project NOLA’s operations certainly fall into a regulatory netherworld. Because the cameras are owned by private citizens and managed by a nonprofit rather than the city, they technically operate outside direct municipal control. Yet the data they generate is used to drive police action.
In April, this regulatory tension came to a boil. Following an internal audit prompted by citizen complaints and media coverage, the New Orleans Police Department paused its involvement with Project NOLA’s facial recognition system pending legal review.
According to sources familiar with the matter, the city’s Office of the Independent Police Monitor raised concerns that the system’s integration with active criminal investigations could constitute a violation of the ordinance’s explicit requirement that facial recognition only be used post-incident and with supervisory approval.
Civil rights attorneys have argued that Project NOLA’s real-time alerts and proactive identification mechanism effectively amount to live biometric surveillance, which the city has never authorized.
Meanwhile, the American Civil Liberties Union (ACLU) and other watchdog groups began voicing alarm over the project’s implications for racial profiling, algorithmic bias, and warrantless surveillance. Unlike government-operated systems, Project NOLA’s framework lacks procedural guardrails like audit logs, access reports, and appeals processes that are often mandated for public-sector facial recognition deployments.
“It’s surveillance without accountability,” said Vera Eidelman, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project. “The fact that it’s facilitated by a nonprofit doesn’t make it any less invasive.”
The emergence of Project NOLA’s system is part of a broader trend in American law enforcement toward a privatized surveillance infrastructure. Across the country, police departments are increasingly relying on camera networks operated by homeowners, retail chains, and third-party vendors. While this allows cities to rapidly expand their surveillance reach without purchasing and maintaining equipment, it also fragments oversight and obscures the lines of authority.
In New Orleans, police access to Ring and Flock cameras has already sparked controversy, and the addition of facial recognition has prompted calls for stricter legislation. There are also questions about how Project NOLA curates its facial recognition watchlists. While the organization claims it uses publicly available mugshots and law enforcement-provided warrants, there is no published transparency report or independent audit of how these lists are compiled or how matches are verified before alerts are dispatched.
Despite the backlash, some law enforcement officials have praised Project NOLA’s system for its responsiveness and accuracy. Officers in neighboring parishes have begun exploring partnerships with the group, and a handful of suburban departments in states like Mississippi and Florida are reportedly in early talks to install Project NOLA cameras.
The organization now refers to its hub as the National Real-Time Crime Center and has rebranded its mission as a nationwide crime-reduction initiative powered by predictive analytics and citizen-contributed infrastructure.
This expansion, though, only raises more questions. Because Project NOLA is not a government agency, it is not directly subject to public records laws or federal privacy regulations like the Privacy Act. This means that citizens have limited recourse if they are incorrectly flagged, surveilled without cause, or unable to determine whether their likeness is stored in the system’s database.
At the same time, the National Real-Time Crime Center is actively shaping law enforcement decisions, effectively functioning as an intelligence node without the governance framework typically required of such entities. Consequently, civil liberties groups have called on Congress and the Federal Trade Commission to investigate Project NOLA’s compliance with consumer protection and biometric data regulations.
“Project NOLA should not exist,” said Caitlin Seeley George, campaigns and managing director at Fight for the Future. “Neither a private citizen nor law enforcement should have a tool that lets them constantly surveil people. This tech is dangerous, it doesn’t make us safer, and it should be banned.”