UN cautions govts to safeguard human rights in AI procurement

AI is a defining trend of this decade, with advances in the technology affecting society for both good and ill. The negatives include the spread of deepfakes, AI-generated scams and fraud, and mass misinformation.
On the positive side, machine-learning-driven algorithms (often labeled AI) bring conveniences to digital ID and verification systems, accelerating face biometric matching, among other benefits. Technological innovations, however, are rarely neutral and often have a darker side.
A new United Nations report draws attention to the procurement and deployment of AI systems and how they should be aligned with the UN Guiding Principles on Business and Human Rights.
“AI systems are transforming our societies, but without proper safeguards, they risk undermining human rights,” said Lyra Jakulevičienė, chair of the UN Working Group on Business and Human Rights, presenting the report to the 59th session of the Human Rights Council.
UN experts outlined the adverse effects AI systems can have when procured or deployed without adequate human rights due diligence. Groups such as women, children and minorities are particularly at risk of discrimination, privacy violations and exclusion, the experts said.
“States must act as responsible regulators, procurers, and deployers of AI,” said Jakulevičienė. “They must set clear red lines on AI systems that are fundamentally incompatible with human rights, such as those used for remote real-time facial recognition, mass surveillance or predictive policing.”
Such human rights concerns are not abstract — Hungary’s planned use of facial recognition for LGBTQ+ surveillance drew a joint statement from 17 EU countries, calling out Budapest for developments that “run contrary to the fundamental values of human dignity, freedom, equality and respect for human rights.”
For states and governments considering digital ID systems, AI is also a hot topic. At the recent ID4Africa 2025, held in Addis Ababa, AI was a frequent topic of discussion. At one workshop, experts from the UNDP advised governments implementing digital ID systems to make procurement an integral part of system design from the very beginning. The workshop also explored the challenges and risks that AI in digital ID solutions poses for procuring institutions.
In the new UN report, the Working Group observed the fragmented nature of the regulatory landscape surrounding AI and human rights. Universal standards are lacking, as are agreed definitions and perspectives from the Global South. While binding legislation on AI and human rights is progressing, the experts say exceptions are "broad" and the involvement of civil society is "limited."
The experts emphasized the importance of conducting robust human rights assessments by both public and private actors, to ensure transparency, accountability, oversight and access to remedy. “Businesses cannot outsource their human rights responsibilities,” the experts said. “Businesses must ensure meaningful stakeholder engagement throughout the procurement and deployment processes, especially with those most at risk of harm.”
“We need urgent global cooperation to ensure that AI systems are procured and deployed in ways that uphold human rights, and ensure access to remedy for any AI-related human rights abuses,” Jakulevičienė said.
In the report, which can be downloaded here, the Working Group summarizes emerging practices by States and businesses, and makes recommendations to States, businesses and other actors on how to incorporate the Guiding Principles on Business and Human Rights into AI procurement and deployment.
Article Topics
best practices | biometrics | digital identity | human rights | procurement | responsible AI | United Nations