Digital authoritarianism increasing as AI-enhanced surveillance reshapes federal policies under Trump

A common denominator links recent developments in the Department of Homeland Security (DHS), the Department of Justice (DOJ), and the broader federal AI governance landscape under the Trump administration: the aggressive expansion of AI into high-risk areas of surveillance and enforcement without sufficient oversight, transparency, or accountability.
This convergence of unchecked technology with diminished civil safeguards, particularly amid a politicized executive branch, has placed democratic norms under siege. And nowhere is this shift more visible than in the rapid adoption of AI technologies across U.S. federal agencies under the Trump administration’s new AI-centric directives and their broader national security implications.
A recent analysis of AI’s dark applications by Brookings Institution Senior Fellow Darrell M. West underscores the political volatility of surveillance technologies when harnessed without robust oversight. West details how machine learning and data aggregation enable state authorities to track behavior and suppress dissent, a practice most dramatically illustrated by the Chinese government’s use of facial recognition, social media tracking, and real-time analytics to surveil dissidents.
While such systems were once viewed as foreign threats, their domestic proliferation has become increasingly plausible and concerning. Reports indicate that DHS has deployed social media monitoring tools that use AI to flag potential threats based on vague definitions like “extremist rhetoric” and “antisemitic activity” that are devoid of any standard to delineate protected speech from actual threats.
“There is a dark side to AI … given its potential for nefarious purposes,” West said. “Concerns around privacy, safety and security have grown as the technology is used to analyze confidential material and amplify false narratives as part of disinformation campaigns. Due to its scalability and capacity to examine large data sets, it can study people’s behavior and act on that information.”
The Environmental Protection Agency (EPA) has already been implicated in internal surveillance claims, allegedly using AI to monitor employee communications for signs of disloyalty to President Donald Trump and his Department of Government Efficiency head, Elon Musk. The EPA has denied the allegations.
West’s warning echoes even louder in the context of DOJ’s 2024 report on AI in criminal justice. While recognizing the potential for AI to enhance accuracy and efficiency in areas like forensic analysis and biometric identification, DOJ also issued stern warnings about bias, opacity, and the risk of constitutional infringements. Predictive policing, risk assessment tools, and automated surveillance systems are highlighted as especially vulnerable to misuse in politically charged climates.
The DOJ report examined four domains where AI is reshaping the justice system: identification and surveillance, forensic science, predictive policing, and risk assessment. These applications offer benefits like greater efficiency, improved forensic reproducibility, and the ability to process vast quantities of data, but the tradeoffs are stark, it noted.
“The 77-page report’s analysis of key applications addresses persistent operational and ethical considerations that exist independently of the regulatory framework. While the policy environment has evolved, the core technical, operational, and civil rights challenges identified in the report continue to warrant consideration,” DOJ’s Council on Criminal Justice said.
Biometric technologies increasingly are being deployed for real-time surveillance despite studies showing that their accuracy varies by race, gender, and age, often amplifying existing systemic biases. Even when used for forensic purposes, which aim to remove subjectivity from evidentiary analysis, they still face challenges in data quality, explainability, and courtroom admissibility.
Predictive policing and risk assessment models intended to forecast crimes or recidivism by drawing from historical data often reflect and perpetuate past injustices. These tools have been criticized for turning flawed records into self-fulfilling prophecies, further entrenching racial profiling and over-policing of marginalized communities. DOJ rightly emphasized that these tools must be subject to continuous evaluation, community engagement, and clear human oversight, a standard not met under the current political regime.
No federal department embodies the convergence of AI, surveillance, and civil liberties violations more clearly than the Department of Homeland Security. A recent analysis of the DHS AI inventory by Paromita Shah, co-founder and executive director of Just Futures Law, revealed a vast and obscure digital arsenal that is hidden in bureaucratic shadows. The report highlights nearly 200 AI applications, many previously unknown to the public, that span visa vetting, border enforcement, deportation logistics, and biometric identification.
Among the most alarming of these programs is Immigration and Customs Enforcement’s (ICE) use of the “Hurricane Score” and Risk Classification Assessment to make detention and surveillance decisions. These systems, reportedly AI-powered, determine whether migrants are released or subjected to electronic monitoring without disclosing the algorithm’s role to the individual or their legal counsel.
Biometric Update reported last week how Geo Group’s technology has become a cornerstone of Trump’s immigration crackdown, and that ICE this month awarded a $30 million contract to Palantir Technologies to develop “ImmigrationOS,” a comprehensive digital platform aimed at streamlining and expanding the agency’s deportation apparatus.
Meanwhile, Customs and Border Protection’s use of facial recognition and drone surveillance at border checkpoints, alongside mobile device scanning and social media tracking via tools like Babel and Fivecast-Onyx, illustrates the breadth of AI surveillance directed at migrant populations and, as recent incidents have shown, at U.S. citizens and lawful permanent residents returning to the U.S. from travel abroad.
Notably, these programs have continued to expand even after having been flagged for potential civil rights violations, with DHS frequently self-certifying compliance and denying third-party oversight.
Shah’s findings also exposed DHS’s failure to provide accurate procurement data, contractor identities, or comprehensive usage descriptions. Some programs were listed as “retired” but found to be continuing under different names or definitions – an accounting sleight of hand that conceals rather than clarifies DHS’s AI footprint.
“In the course of our research, we also discovered that DHS was already violating existing policies and laws related to transparency, oversight, and its obligations to monitor its products for AI harm,” Shah said. “Our team met with DHS to share our findings and organized letters demanding that DHS shutter AI to mitigate further harm.”
Shah said “the pressure exerted by national civil rights groups led to the termination of some AI programs,” and prompted “DHS’s review and assessment of its AI inventory, including direct responses to our inquiries and public pages that publicly named AI tools and uses that had never previously been identified.”
Continuing, Shah said, “In the last days of the Biden administration, DHS released its most complete inventory, revealing new AI uses that it had kept hidden. It was the only requirement that DHS was able to meet out of a long list of requirements from the Biden administration’s Executive Orders on AI directed to federal agencies, many of which were fast-tracking AI without considering whether it would hurt the public or violate civil rights protections. Just Futures Law went through the most recent DHS inventory to share these insights with the public.”
The shift in federal AI policy under Trump has accelerated this technological arms race by emphasizing deregulation and market dominance.
“The federal government’s approach to AI governance underwent a significant shift in early 2025,” DOJ said. “On January 20, 2025, Executive Order 14148 revoked the previous Biden-Harris AI Executive Order 14110 of October 30, 2023. This revocation was followed by Executive Order 14179 on January 23, 2025. Titled ‘Removing Barriers to American Leadership in Artificial Intelligence,’ the order established new U.S. policy priorities focused on enhancing America’s AI dominance.”
“This policy shift,” DOJ said, “coincided with the announcement of the Stargate Project, a $500-billion private sector initiative led by SoftBank and OpenAI, with participation from major technology partners such as Oracle, NVIDIA, Microsoft, and Arm. The project aims to build extensive AI infrastructure across the country, beginning in Texas, with goals of creating hundreds of thousands of jobs and strengthening America’s strategic AI capabilities. This massive private investment aligns with the new Trump administration’s emphasis on reducing regulatory barriers and promoting U.S. leadership in AI development through market-driven approaches.”
These policy changes not only revoked Biden-era guidelines rooted in civil rights and ethical AI principles, but also gutted accountability mechanisms within federal agencies. Under Trump, DHS’s Office for Civil Rights and Civil Liberties has reportedly lost much of its functional power, even as AI applications scale up for politically motivated enforcement. The risks multiply when law enforcement tools are turned inward toward protesters, whistleblowers, journalists, or entire communities labeled as suspect based on opaque AI logic.
The DOJ report outlined a comprehensive framework for responsible AI use that emphasizes maintaining centralized AI inventories, establishing clear oversight structures, enforcing rigorous testing and validation, and ensuring human control in high-stakes decisions. But these ideals remain aspirational in the politically driven environment created by the Trump administration in which political loyalty supersedes transparency and constitutional norms are reinterpreted through the lens of executive expediency.
The ongoing expansion of AI across federal agencies – absent robust legislative constraints, judicial review, or independent audits – represents more than a technological transformation. It is a structural reconfiguration of power, where surveillance becomes ambient, discretion becomes algorithmic, and accountability becomes elusive.
The convergence of DHS’s opaque AI arsenal, DOJ’s cautionary criminal justice findings, and West’s portrait of the creeping surveillance state collectively paints a picture of a society on the brink of digital authoritarianism. This transformation is unfolding not with fanfare, but through silent shifts in code, contracts, and command chains. As public and private systems integrate biometric scans, social media tracking, predictive analytics, and workplace monitoring into the fabric of daily life, democratic values risk being redefined by the logic of surveillance.