AI used by migration agencies globally: Can it make sense of a difficult task?
Artificial intelligence has taken root in international migration more broadly than many people realize. Early-adopting governments see algorithms, some involving biometrics, as a way to decide immigration and asylum matters more efficiently and cheaply.
But like immigration itself, the picture is complex.
From a humanitarian point of view, traditional immigration management is an endeavor that, counterintuitively, only gets less efficient the harder agencies work to maximize the quality of their decisions. Each case is context-dependent and complex, and processing them has historically required small armies of conscientious specialists.
So replacing expensive, slow human effort with algorithms has its appeal. Some even hope that software might become the embodiment of dispassionate, blind Lady Justice.
A new research paper by Ana Beduschi, an associate professor of law at the University of Exeter in England, points to several national and regional organizations that have deployed such systems, are running pilot programs or are testing software.
In Germany, writes Beduschi, the Federal Office for Migration and Refugees has piloted projects using software for face and dialect recognition, name transliteration and mobile-device analysis, all in the quest to confirm an applicant’s identity and country of origin as part of asylum determinations.
Canadian officials have put artificial intelligence to use in making some decisions about immigration and asylum requests. Beduschi also cites programs in Malaysia, Nepal and Bangladesh to automate migration management.
Sweden is even looking into the possibility that algorithms can predict future migration crises.
The European Union (to which Germany belongs) is preparing to update the information systems of its free-travel Schengen Area with biometric data, including facial images and DNA. The goal is to capture and return unwanted migrants.
The United Nations High Commissioner for Refugees is expanding face- and iris-scanning systems across Africa and Asia.
Noble as that organization’s charter is, there are concerns about what has come to be called “surveillance humanitarianism,” the paper’s author writes. More and more information is being collected from vulnerable populations by the very organizations formed to help them. Biometric technology creates another bureaucratic process to comply with, and to battle if it makes mistakes.
And centralized biometric databases are not necessarily any better protected from cybercrime than any other organization’s systems.
It is also possible, Beduschi writes, that black-box algorithms could, as a matter of policy, be secretly written to exclude whole social classes, religions and nationalities, along with politically inconvenient individuals.
Then there is the fact that algorithms reflect the people who write them, and few of the developers writing this software are people of color living in unstable nations in the developing world. The resulting blind spots matter: a mismatch at a refugee camp could erroneously identify an innocent person as a criminal or even a terrorist.
Government officials, she writes, need to thoroughly think through their goals for artificial intelligence and the risks of its use.