Cherry-picking old stats undermines US border biometrics criticism

Influential studies from 2018 and 2019 still hold a prominent place in many reports on demographic differentials or bias in face biometrics applications, despite their greatly diminished relevance and some obviously better sources of current information.

A report from Just Futures Law and Mijente alleges that the U.S. Department of Homeland Security and its agencies are using artificial intelligence for a wide variety of immigration system applications with tremendous impact on vulnerable people. They are doing so with little oversight, and in some cases in ways that do not comply with federal policies, including DHS’ own rules.

The 54-page “Automating Deportation: The Artificial Intelligence Behind the Department of Homeland Security’s Immigration Enforcement Regime” report argues that DHS’s AI processes increase the risk of bias in decision-making. The system was already discriminating against Black and Muslim immigrants, the groups say, and AI systems trained on biased data exacerbate a bad situation.

Facial recognition receives a significant amount of attention in the report, spanning Customs and Border Protection’s (CBP’s) CBPOne app, Immigration and Customs Enforcement’s (ICE’s) contract with Clearview AI, and ICE’s SmartLINK mobile application.

An inconvenient detail

Many of the arguments are backed by extensive authoritative references, and the overall point may be valid.

So why does the first paragraph on DHS’s use of biometrics refer to the 2019 version of an ongoing NIST study updated in 2022?

Perhaps the reference is used for the same reason the 2019 report is so often used to generalize about facial recognition technology: it supports the desired conclusion, despite warnings from the report’s lead author against doing so.

The 2022 version of NIST’s evaluation of demographic differentials in face biometrics shows that most algorithms still return higher error rates for darker-skinned people. It also shows significant improvement among many algorithms, and little or no difference in error rates between people with lighter and darker skin for some algorithms.

A separate report from Just Security makes similar arguments about the use of AI by U.S. immigration agencies at the border. “AI at the Border: Racialized Impacts and Implications” suggests that police reform is needed to avoid AI exacerbating human rights abuses.

Like the report above, Just Security states that “these algorithms have been found to inaccurately identify Black faces at a rate 10 to 100 times more than white faces.” The linked report refers to the same 2019 NIST evaluation but, tellingly, drops the qualifier “many of” from in front of “these algorithms.” Including it would amount to an acknowledgement that if DHS does not use the algorithms the stated error rate applies to, then the statistic does not apply.

Again, the overall point may be correct, but the argument is weakened by the spurious reference to evidence chosen according to ideological convenience.

The same pattern has been seen repeatedly, with rights advocacy groups too often selecting evidence that is misrepresented or even irrelevant.

The arguments about the use of AI for border control are important, and deserve to be considered in light of the best evidence available.
