Toronto Declaration calls for application of human rights frameworks to machine learning
Human rights organizations have launched a document calling for the framework of international human rights law to be applied to machine learning to address the risks associated with the technology’s applications. “The Toronto Declaration on protecting the rights to equality and non-discrimination in machine learning systems” was authored by Amnesty International and Access Now, and endorsed by Human Rights Watch and the Wikimedia Foundation on its launch at RightsCon 2018.
The document was created with input from a drafting committee made up of human rights activists and academics, and with the hope that it will be adopted by civil society, private sector, and government stakeholders.
“The Toronto Declaration is unique in setting out tangible and actionable standards for states and the private sector to uphold the principles of equality and non-discrimination, under binding human rights laws,” said Anna Bacciarelli, Technology and Human Rights Advisor at Amnesty International.
The declaration outlines human rights laws and standards which can be applied to the development of an ethical framework to be applied to machine learning. In addition to the necessity of applying existing human rights law to the technology, the declaration asserts the relevance and importance of the rights to equality and non-discrimination, the obligations of public and private sector organizations to prevent discrimination, and the importance of protecting the rights of all individuals and groups and promoting diversity and inclusion. The importance of diverse inputs to the development of machine learning systems is noted under the latter assertion.
“This declaration — that universal, international human rights law applies also to AI — is critically needed as the debate on ethics and bias in machine learning proceeds. Companies and regulators must take note,” said Dinah PoKempner, General Counsel at Human Rights Watch.
Researchers including Joy Buolamwini of the M.I.T. Media Lab, founder of the Algorithmic Justice League, have established that the accuracy of leading facial recognition systems varies significantly depending on the race and sex of the person they are applied to. These discrepancies motivated hearings by the U.S. House Oversight and Government Reform Committee’s Subcommittee on Information Technology earlier this year.