Cherry-picking old stats undermines US border biometrics criticism


Influential studies from 2018 and 2019 still hold a prominent place in many reports on demographic differentials or bias in face biometrics applications, despite their greatly diminished relevance and the availability of obviously better sources of current information.

A report from Just Futures Law and Mijente alleges that the U.S. Department of Homeland Security and its agencies are utilizing artificial intelligence for a wide variety of immigration system applications that have tremendous impact on vulnerable people. They are doing so with little oversight, and in some cases in ways that do not comply with federal policies, including DHS’ own rules.

The 54-page “Automating Deportation: The Artificial Intelligence Behind the Department of Homeland Security’s Immigration Enforcement Regime” report argues that DHS’s AI processes increase the risk of bias in decision-making. The system was already discriminating against Black and Muslim immigrants, the groups say, and AI systems trained on biased data exacerbate a bad situation.

Facial recognition receives a significant amount of attention in the report, with tools including Customs and Border Protection’s (CBP’s) CBP One app, Immigration and Customs Enforcement’s (ICE’s) contract with Clearview AI, and ICE’s SmartLINK mobile application.

An inconvenient detail

Many of the arguments are backed by extensive authoritative references, and the overall point may be valid.

So why does the first paragraph on DHS’s use of biometrics refer to the 2019 version of an ongoing NIST study updated in 2022?

Perhaps the reference is used for the same reason the 2019 report is so often cited to generalize about facial recognition technology, despite warnings from the report’s lead author against doing so.

The 2022 version of NIST’s evaluation of demographic differentials in face biometrics shows that most algorithms still return higher error rates for darker-skinned people. It also shows significant improvement among many algorithms, and little or no difference in error rates between people with lighter and darker skin for some algorithms.

A separate report from Just Security makes similar arguments about the use of AI by U.S. immigration agencies at the border. “AI at the Border: Racialized Impacts and Implications” suggests that police reform is needed to avoid AI exacerbating human rights abuses.

Similarly to the above report, Just Security states that “these algorithms have been found to inaccurately identify Black faces at a rate 10 to 100 times more than white faces.” The linked report refers to the same 2019 NIST evaluation, but tellingly, leaves out “many of” from in front of “these algorithms.” Including that qualifier would amount to an acknowledgement that if DHS doesn’t use the algorithms the stated error rate applies to, then the statistic does not apply.

Again, the overall point may be correct, but the argument is weakened by the spurious reference to evidence chosen according to ideological convenience.

The same pattern has been seen repeatedly, with rights advocacy groups too often selecting evidence that is misrepresented or even irrelevant.

The arguments about the use of AI for border control are important, and deserve to be considered in light of the best evidence available.
