DHS funding law quietly advances biometric, surveillance infrastructure

The Homeland Security and Further Additional Continuing Appropriations Act, which became law on April 30 and broke the months-long shutdown of the Department of Homeland Security (DHS), does not read like a sweeping surveillance bill. But a close reading of the law shows that Congress has again funded and preserved several of the technical building blocks that make up DHS's expanding surveillance architecture.
The law’s most direct biometric provision appears in Section 403, under Title IV, Research, Development, Training, and Services.
This section authorizes federal funds made available to the U.S. Citizenship and Immigration Services (USCIS) to be used for “the collection and use of biometrics” taken at a USCIS Application Support Center that is “overseen virtually” by USCIS personnel using appropriate technology.
In practical terms, the language supports a model in which biometric collection can occur at an application support site without traditional in-person federal oversight, so long as USCIS personnel are virtually supervising the process.
The provision matters because Application Support Centers are central to the immigration benefits process. They are where applicants may provide fingerprints, photographs, signatures, or other identity information required for background checks and adjudication.
The law does not spell out the specific biometric modalities covered by Section 403, but its use of the broad, unqualified term “biometrics” gives USCIS flexibility.
The key policy change is not simply that biometrics may be collected, which is already part of the immigration system. It is that Congress is allowing federal funds to support biometric collection and use in a virtually overseen environment.
That is a notable expansion and points toward a more distributed biometric infrastructure, one in which biometric capture can be supervised through technology rather than only through physical federal presence.
For USCIS, that could mean lower staffing burdens and more flexible processing capacity.
For privacy and civil liberties advocates, it raises familiar questions about data quality, identity assurance, contractor involvement, remote supervision, error handling, and what safeguards apply when biometric collection is pushed into more technologically mediated settings.
Last month, USCIS issued a Request for Information seeking industry input on remote document authentication and identity verification technology.
The law’s clearest surveillance provision appears in Section 207. It states that none of the funds made available for Border Security Assets and Infrastructure under Customs and Border Protection’s (CBP) Procurement, Construction, and Improvements account may be used for the procurement or deployment of surveillance systems that are not autonomous.
That language is striking. Rather than merely permitting autonomous surveillance systems, the provision bars the use of certain border security infrastructure funds for surveillance systems that are not autonomous. The effect is to push border surveillance procurement toward autonomous systems.
The enacted text does not define “autonomous,” instead incorporating a definition from another law. But the policy signal seems clear. Congress is favoring autonomous surveillance over non-autonomous systems.
That is where the AI relevance comes in. The statute does not expressly say “AI,” “machine learning,” “computer vision,” or “facial recognition” in Section 207. But autonomous surveillance systems commonly depend on automated sensing, automated detection, algorithmic classification, remote monitoring, and software-assisted alerts.
In the border context, autonomy often means systems that can detect movement, classify objects, cue cameras, generate alerts, or integrate sensor data with less direct human operation.
The law’s AI significance is therefore indirect but substantial. It funds the infrastructure layer into which AI-enabled analytics can be inserted.
CBP receives broad operational and procurement funding that intersects with surveillance in other ways as well.
The law provides $222.9 million for CBP procurement, construction, and improvements, including procurement of marine vessels, aircraft, and unmanned aerial systems. Unmanned aerial systems are not inherently biometric systems. They are also not necessarily AI systems. But they are surveillance platforms.
Their significance depends on payloads, mission, sensors, data retention rules, data sharing practices, and whether video or still imagery is analyzed by humans, automated tools, or AI-enabled object recognition systems.
In the broader DHS environment, drones can become part of a larger surveillance network when linked to ground sensors, towers, targeting systems, real-time operations centers, or databases used by federal and local law enforcement.
Another major surveillance-related item appears in Section 109, which provides $20 million for the procurement, deployment, and operations of body-worn cameras for agents and officers performing enforcement activities under the Immigration and Nationality Act.
The secretary must provide appropriators with a spend plan within 30 days of enactment.
Body-worn cameras are often presented as accountability tools, particularly in law enforcement and immigration enforcement contexts. But they are also mobile surveillance systems.
Their privacy implications depend heavily on activation policies, retention rules, redaction practices, access controls, whether footage can be used for investigations beyond the initial encounter, and whether still images or video can be searched or analyzed with biometric tools.
The law appropriates money for cameras, but the text itself does not impose detailed restrictions on biometric analysis, facial recognition, or secondary use of footage.
The law’s intelligence and information-sharing provisions are also relevant. DHS’s Office of Intelligence and Analysis (I&A) and Office of Homeland Security Situational Awareness receive $340.8 million for operations and support, including up to $2 million for secure space at fusion centers.
Fusion centers sit at the intersection of federal, state, local, tribal, territorial, and private-sector information sharing.
Section 107 places a limit on I&A, barring funds from being used to conduct a covered activity as defined by the Intelligence Authorization Act for Fiscal Year 2025.
But the same section makes clear that it does not limit legal, privacy, civil rights, or civil liberties oversight and does not prohibit I&A personnel from sharing intelligence information with, or receiving information from, foreign, state, local, tribal, or territorial governments, the private sector, other federal agencies, or DHS components.
That carveout preserves the information-sharing function. From a surveillance standpoint, the importance is not only what DHS collects directly, but how information flows through fusion centers and interagency networks.
In practice, systems involving biometrics, license plate readers, watchlists, suspicious activity reporting, open-source intelligence, drone feeds, or other data sources can become more powerful when connected to shared intelligence environments.
The bill also contains language related to CBP targeting and vetting. Section 206 allows funds to be used to alter operations within CBP’s National Targeting Center, but bars reductions in anticipated or planned vetting operations at existing locations unless specifically authorized by a later statute.
The National Targeting Center is a major node in DHS’s data-driven border and travel security apparatus.
The enacted language does not describe the tools used there, nor does it mention AI. But vetting and targeting are areas where algorithmic tools, database checks, automated risk assessment, and identity-resolution systems can play an important role.
The Transportation Security Administration (TSA) receives another set of technology-focused appropriations. The law provides more than $10.6 billion for TSA operations and support, $330.2 million for procurement, construction, and improvements, and $24 million for research and development.
Section 211 requires TSA to submit a capital investment plan for new and replacement transportation security equipment, a five-year technology investment plan, and an Advanced Integrated Passenger Screening Technologies report.
Again, the text does not say AI. But the screening environment is increasingly defined by automation, identity verification, advanced imaging, explosives detection, risk-based screening, and passenger data systems.
The requirement for an Advanced Integrated Passenger Screening Technologies report suggests continued congressional oversight of TSA’s evolving checkpoint and screening technology ecosystem, even if the enacted text does not specify biometric facial recognition or AI-enabled screening.
The report was originally required under the FY 2019 DHS funding bill. TSA was supposed to have submitted a detailed report on passenger and baggage screening technologies not later than 180 days after the bill was enacted.
The report is to “include a useful description of existing and emerging technologies capable of detecting threats concealed on passengers and in baggage, as well as projected funding levels for each technology identified in the report for the next five fiscal years.”