UN sets signposts for good digital ID and ethical use of facial recognition on digital roadmap
Universal digital identity must be “good,” and legislation and safeguards must govern facial biometrics, if society is to benefit from the technologies without risking significant harm to individual freedom and privacy, according to the United Nations’ newly released “Roadmap for Digital Cooperation.”
The UN’s High-level Panel on Digital Cooperation composed the report based on a series of roundtable group discussions on advancing the recommendations of the initial report on “The Age of Digital Interdependence.” The report considers digital public goods, digital inclusion, and digital capacity building. Nearly 87 percent of people in developed countries use the internet, compared to less than one in five people in the world’s least developed countries.
The section on digital human rights begins by acknowledging the pre-digital basis of international human rights accords. The report says a risk of protection gaps from the constant evolution of digital technologies requires attention, and addresses four particular areas of concern, starting with data protection and privacy.
The challenge of accessing basic goods and services faced by the billion people in the world without recognized identification is the starting point for a subsection on digital identity.
“A ‘good’ digital identity that preserves people’s privacy and control over their information can empower them to gain access to these much-needed services,” the report authors write.
Balancing the need for encryption with legitimate law enforcement operations is possible, the report says, but the design of digital identity systems outside of privacy and data protection frameworks is concerning. The adoption of further safeguards for digital identity, including decentralized data storage and the incorporation of “privacy by design” principles, is encouraged.
The next subsection addresses “Surveillance technologies, including facial recognition.” The report states that surveillance technologies can serve as effective law enforcement tools, but past breaches of privacy by governments, individuals, and the private sector raise the specter of serious potential problems. Arbitrary arrests, denial of peaceful protest rights, entrenchment of bias, and misidentification are presented as possible risks that require mitigation.
The next section, on artificial intelligence, notes potential dangers such as lethal autonomous weapons systems and deepfakes. The section states that a lack of representation and inclusiveness, poor overall coordination, and the inability of the public sector to take advantage of AI are major challenges.
Problems with digital trust and security, from data breaches to cyber warfare, are also examined.
artificial intelligence | biometrics | data protection | data storage | deepfakes | digital identity | digital public goods | ethics | facial recognition | privacy | United Nations | video surveillance