Cherry-picking old stats undermines US border biometrics criticism

Influential studies from 2018 and 2019 still hold a prominent place in many reports on demographic differentials or bias in face biometrics applications, despite their greatly diminished relevance and the ready availability of better, more current sources.
A report from Just Futures Law and Mijente alleges that the U.S. Department of Homeland Security and its agencies are using artificial intelligence for a wide variety of immigration system applications with tremendous impact on vulnerable people. They are doing so with little oversight, the report says, and in some cases in ways that do not comply with federal policies, including DHS’ own rules.
The 54-page “Automating Deportation: The Artificial Intelligence Behind the Department of Homeland Security’s Immigration Enforcement Regime” report argues that DHS’s AI processes increase the risk of bias in decision-making. The immigration system was already discriminating against Black and Muslim immigrants, the groups say, and AI systems trained on biased data exacerbate a bad situation.
Facial recognition receives a significant amount of attention in the report, which covers tools including Customs and Border Protection’s (CBP’s) CBP One app, Immigration and Customs Enforcement’s (ICE’s) contract with Clearview AI, and ICE’s SmartLINK mobile application.
An inconvenient detail
Many of the arguments are backed by extensive authoritative references, and the overall point may be valid.
So why does the first paragraph on DHS’s use of biometrics refer to the 2019 version of an ongoing NIST study updated in 2022?
Perhaps the reference is chosen for the same reason the 2019 report is so often used to generalize about facial recognition technology, despite warnings from the report’s lead author against doing so.
The 2022 version of NIST’s evaluation of demographic differentials in face biometrics shows that most algorithms still return higher error rates for darker-skinned people. It also shows significant improvement among many algorithms, and little or no difference in error rates between people with lighter and darker skin for some algorithms.
A separate report from Just Security makes similar arguments about the use of AI by U.S. immigration agencies at the border. “AI at the Border: Racialized Impacts and Implications” suggests that police reform is needed to avoid AI exacerbating human rights abuses.
Similarly to the above report, Just Security states that “these algorithms have been found to inaccurately identify Black faces at a rate 10 to 100 times more than white faces.” The linked report refers to the same 2019 NIST evaluation, but, tellingly, leaves out “many of” from in front of “these algorithms.” Including that qualifier would amount to acknowledging that if DHS does not use the algorithms to which the stated error rate applies, the statistic is irrelevant.
Again, the overall point may be correct, but the argument is weakened by a spurious reference to evidence chosen for ideological convenience rather than relevance.
The same pattern has appeared repeatedly, with rights advocacy groups too often citing evidence that is misrepresented or simply irrelevant.
The arguments about the use of AI for border control are important, and deserve to be considered in light of the best evidence available.