DHS needs to better assess bias, privacy impact of AI surveillance

GAO offers roadmap to improve transparency, safeguard civil liberties
While the U.S. Department of Homeland Security (DHS) is developing policies and procedures to address bias risk from technologies that use AI, an audit by the Government Accountability Office (GAO) found that the department does not have policies or procedures in place “to assess bias risks from the use of all detection, observation, and monitoring technology” that is deployed throughout its constituent agencies.

GAO further found that while “DHS law enforcement agencies may seek out advice from DHS’s Office for Civil Rights and Civil Liberties (CRCL) on bias issues related to technology use … there are no requirements to do so. As a result, CRCL’s level of review of detection, observation, and monitoring technologies has varied.”

“By developing policies and procedures to assess and address the risk of bias posed by DHS law enforcement agencies’ use of detection, observation, and monitoring technologies, CRCL could help ensure these technologies are not infringing on civil rights and civil liberties by introducing bias,” GAO said.

According to the GAO, these problems have existed since September 2023, when it reported that DHS – and the Department of Justice – “could do more to ensure training and privacy requirements are met.” At that time, GAO made ten recommendations, including that Immigration and Customs Enforcement (ICE) establish and implement a process to periodically monitor whether staff using facial recognition services to support criminal investigations have completed training requirements.

To carry out its mission of safeguarding the nation, DHS has increasingly turned to cutting-edge detection, observation, and monitoring technologies, from automated license plate readers to AI-driven facial recognition systems, significantly amplifying DHS’s surveillance capabilities. But, as GAO’s audit revealed, this technological arsenal carries profound implications for privacy and the risk of bias, creating a fraught intersection of security and civil liberties.

DHS’s deployment of over 20 types of surveillance technologies in public spaces represents a dramatic expansion of its observational reach. Many of these tools operate without the need for a warrant, leveraging the premise that individuals in public spaces cannot reasonably expect privacy. For instance, automated license plate readers and pole cameras, often mounted in plain sight or hidden on utility poles, capture a continuous stream of data, including license plate information and high-definition video footage. Additionally, analytic software, some powered by AI, enhances the ability to detect, track, and analyze patterns of activity.

Such systems, while invaluable in criminal investigations and border security operations, also evoke significant concerns regarding individual privacy. The ability of these technologies to collect and analyze personally identifiable information (PII) – from license plates to biometric data – raises the specter of a surveillance state where individuals’ movements and associations are meticulously tracked.

The GAO audit highlights that DHS has not uniformly implemented policies addressing key privacy protections across its array of technologies. GAO found that foundational privacy protections – among them data minimization, purpose specification, data security, retention limits, and accountability – are inconsistently applied across DHS’s component agencies such as ICE, Customs and Border Protection (CBP), and the Secret Service.

For example, while CBP’s policies for non-concealed body-worn cameras incorporate all six privacy protections outlined by DHS guidelines, its policies for pole cameras address none. Similarly, ICE’s policies for automated license plate readers robustly cover privacy safeguards, but those for pole cameras and other technologies are less comprehensive.

GAO said that while “DHS conducts privacy impact assessments to provide the public with information on how the agency plans to address key privacy protections,” policies “are needed to direct employees in how they are to implement these privacy protections when using a particular technology. By requiring that policies for the use of each technology address key privacy protections, DHS agencies would have better assurance that the privacy protections are being implemented and that technology users are aware of their responsibilities to protect privacy.”

Without clear directives, GAO said, agency personnel may inadvertently misuse data or fail to implement necessary safeguards, exposing individuals to risks of unauthorized surveillance and data breaches. Furthermore, reliance on privacy impact assessments (PIAs) to inform employees about privacy obligations has proven insufficient, GAO said, noting that PIAs, while a critical transparency tool, often lack operational detail and are not tailored to the specific nuances of individual technologies.

Compounding these challenges is the phenomenon known as the “mosaic effect,” where disparate pieces of data from various sources are aggregated to create a detailed and revealing portrait of an individual. For example, automated license plate readers capturing vehicle locations, combined with facial recognition data and other surveillance inputs, can yield insights into a person’s daily routines, associations, and even political or religious affiliations.

GAO’s findings underscore that such capabilities, while not inherently unlawful, pose a chilling effect on constitutionally protected activities such as free speech and assembly. The knowledge or perception of being constantly monitored may also deter individuals from participating in public protests or engaging in other forms of dissent.

Beyond the obvious privacy concerns, GAO’s audit shines a critical light on the risks of bias in DHS’s deployment of technology. While AI-driven tools hold promise for enhancing the efficiency and accuracy of surveillance, they also amplify the risk of systemic and algorithmic biases.

GAO’s audit report highlights that existing DHS policies and procedures fall short in addressing these risks comprehensively. While the department is developing guidance to assess and mitigate AI-specific biases, it has not extended these efforts to encompass its broader suite of detection, observation, and monitoring technologies. This lack of oversight leaves significant gaps in safeguarding civil liberties, particularly for marginalized communities that are disproportionately affected by surveillance practices.

GAO noted that “while DHS is developing policies and procedures to help ensure that AI technologies are assessed for bias, it has no plans to develop such policies or procedures for other detection, observation, and monitoring technologies” because “it is not required to.” GAO added that “CBP, ICE, and Secret Service stated that they do not have policies or procedures to assess their use of all detection, observation, and monitoring technology specifically for bias.”

Meanwhile, the consequences of biased surveillance are far-reaching. Misidentifications and discriminatory targeting can lead to wrongful arrests, invasive investigations, and a breakdown of trust between law enforcement and the communities they serve. Moreover, decisions about where and how to deploy surveillance technologies – such as prioritizing high-crime areas – can inadvertently reinforce racial and economic disparities in law enforcement practices.

GAO’s audit does not merely document these shortcomings; it provides a roadmap for reform. Among its recommendations are the development of robust policies and procedures to assess and mitigate bias across all surveillance technologies, not just those powered by AI. Such measures would help ensure that DHS technologies do not disproportionately impact specific groups or infringe on civil rights.

The audit report also emphasizes the need for DHS to enhance its privacy protections by embedding them directly into operational policies for each technology. These policies should articulate clear guidelines on data collection, retention, and sharing, as well as enforce accountability through audits and supervisory reviews.

Furthermore, GAO calls for greater transparency and collaboration between DHS and its Office for Civil Rights and Civil Liberties. By requiring law enforcement agencies to consult CRCL on potential bias and privacy implications before deploying new technologies, DHS can foster a culture of accountability and ethical vigilance, GAO said.

The issues raised by the GAO audit extend beyond DHS, reflecting broader societal debates about the role of technology in governance and law enforcement. The rapid proliferation of surveillance tools has outpaced the development of legal and ethical frameworks to regulate their use. This regulatory lag leaves room for abuses of power and the erosion of public trust in institutions designed to protect and serve.

As GAO’s audit of DHS illustrates, the challenges of privacy and bias in technology are not insurmountable, but they do necessitate a concerted effort and commitment. By implementing the GAO’s recommendations, DHS can set a precedent for responsible and equitable use of surveillance technologies – balancing the imperatives of national security with the fundamental rights of the individuals it serves.

The question is not whether technology will continue to play a central role in law enforcement, but whether its use will be guided by principles that uphold the dignity, equality, and privacy of all individuals. The stakes, as the GAO audit makes abundantly clear, could not be higher.
