Don’t remove AI’s training wheels at the border yet — authors
Two authors examined the digital identification and surveillance tools used by nations around the world and found that, by and large, democratic governments are following the for-profit, secretive path established for face biometrics in the United States.
Both writers worry that developed Western economies see AI biometrics as a border force multiplier or even a replacement for humans. Subjectivity, an unavoidably human trait, is a liability, according to automation proponents.
Indeed, the controversial iBorderCtrl system, billed as being able to detect liars, is being tested in the European Union as part of that bloc's sprawling biometric immigration-control program.
The two articles largely focus on different regions of the world: a commentary in The Conversation by Niamh Kinchin, a senior lecturer at the University of Wollongong's law school, and an analysis in Canada's The Walrus by investigative reporter Hilary Beaumont.
Kinchin dwells for the most part on EU and Australian programs. Beaumont writes mostly about Canada’s biometric-based immigration-control campaigns.
But neither strays far from the U.S.-Mexico border, which has seen waves of publicly funded, privately developed digital surveillance systems deployed.
In this literal wild west, migrants and asylum seekers are viewed generically as threats, and any effort to stop their movement into the United States is considered fair, according to a vocal segment of local residents and national political populists.
Accountability is lacking in this scenario, the authors conclude. Private companies, sometimes with overt political goals, often withhold what they consider proprietary information, including details of their systems' specific capabilities.
Local, state and federal government agencies are also denying the public the information it needs to decide whether it wants tax dollars funding the projects. Instead, people are fed hair-raising stories of iris scanning being used to find and jail human-trafficking murderers.
Those stories are true (and few), but they get disproportionately more attention than the numerically more significant cases of innocent people wrongly identified and summarily adjudicated on the basis of biased, fallible algorithms. Even a two-percent error rate, to use one vendor's claim, means two wrong calls in every hundred, enough to rebuff many legitimate asylum seekers when officials become over-reliant on the technology.
And both point to the danger of mission creep: it is common for ongoing government programs to expand in scope, budget and impact.
Both pieces ultimately make the case that AI is not yet up to the task of ethically controlling immigration. Profit and political gain, however, will continue to force the issue.