Banning won’t give facial recognition a conscience, but research and informed regulation might
Attempts to tame biometric facial recognition systems by banning them will only create new problems, according to a global think tank. But the group's report also warns that failing to control the algorithms poses a grave threat to individual liberty.
The paper, published by the Global Campus of Human Rights, tries to cut a middle path between opponents and proponents, and ultimately decides that a great deal of research needs to be done before governments make broad decisions.
Global Campus comprises academics from more than 100 universities in Europe (including southwestern Russia), Africa, the Asia-Pacific, Latin America and the Caribbean, and the Arab world. U.S. schools are not part of the organization, which is funded by the European Union.
In the background of this issue, the paper’s authors write, is the certainty that individuals, organizations and nations are already pushing facial recognition to levels that some feel are unethical if not illegal.
In part because of this, public sentiment in Western democracies has been sliding toward greater regulation, up to and including banning the use of face biometrics by governments and businesses. It is refreshing, although not unique, for a large body of academics to recognize that the technology will not be wished away.
The best way to manage the use of algorithms, at least by governments, is to better understand how they work and how they fail. Their failures often result in unequal treatment of anyone who is not a white male.

Such failures, though, are a matter of code writing that governments can steer through rewards and regulations, according to the group.
The bigger issues involve dragnet surveillance, which can intimidate people and make them act in ways they ordinarily would not. The harm that widespread facial recognition does to human rights would outweigh the good that comes from surveillance policing.
Better, the report states, to research methods of narrowing the role of these systems so that liberties and policing are kept in balance.
The clearest example of biometrics deployed out of all proportion, of course, is part of China’s social credit system. The report’s authors say nations can avoid building such an oppressive system while still giving governments legitimate, rational tools.
Research should lead to detailed regulations that leave no gap for a jurisdiction to unnecessarily override human rights, including a person’s control over personal data like their face and other biometrics, according to the paper.
In fact, this effort can inform or be informed by campaigns to give individuals control over their own privacy and how their data is used.