UN panel on digital ID and AI coasts on assumptions about role of AI agents

At an event held during the 80th UN General Assembly, a discussion addressed the question of “Trusted Digital Identity for People & AI,” through the lens of deploying Digital Public Infrastructure (DPI) that is secure and equitable – with the topic of AI agents dominating the conversation.
A blog from the Decentralized Identity Foundation (DIF), which had two members on the panel, says the primary challenge that panelists looked at is “the persistent gap between policy and production.”
“While global goals like SDG 16.9 are clear, the goal of providing legal identity for all by 2030 is often stalled by protocol fragmentation and the lack of a robust architectural model for a world where both people and AI agents are first-class citizens,” it says.
The UN Sustainable Development Goals were not designed with AI agents in mind, nor is it clear on what basis they would claim citizenship. But the current moment proceeds as though AI is inevitable, and as such it must be tabled for discussion.
“Today there’s not really a robust model for AI agents,” says Matt McKinney, CEO of AIGNE, “the agentic ecosystem for AI apps,” and one of the aforementioned DIF members. “That’s something a lot of people are working on, but there really is no clear line of sight or clear path in terms of bringing AI agents into our trusted ecosystem.”
No structured framework, but AI agents welcomed into ecosystem anyway
That statement raises the question of whether or not agentic AI – a relatively new entrant into the tech ideasphere – should be part of a robust, trusted digital identity ecosystem at all. Nonetheless, McKinney says that, “as we build identity and as we think about bringing people into a kind of trusted environment, we first need to acknowledge that there are two types of subjects that we’re dealing with. One is people and the second is AI agents that they authorize.”
For all its purported transformational power, however, agentic AI never comes without a warning that we need some way to distinguish who’s real from who’s bot. Humans, McKinney says, “need the ability to safely delegate tasks without actually handing over the keys to their entire digital life. And as AI moves closer and closer to us, this is becoming a bigger and bigger issue: how do we actually maintain our personal identity from an agent’s perspective?”
Once again, rather than stepping back to ask whether a continued convergence of humanity with algorithmic large language models is actually beneficial, McKinney suggests the trick is in the right settings: you just have to make sure the agents only have the keys they need to unlock the doors you want them to.
He suggests this can be accomplished through controller-bound credentials, “a special ID that permanently links the AI to its owner so we always know who’s accountable,” often achieved with encrypted biometrics; and by ensuring that AI is auditable and accountable by implementing “scoped and time-boxed permissions.”
Most important, however, is “having an auditable path to revocation, meaning we log every time the AI uses its key and we have the power to turn off that key at any time.”
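What McKinney describes is, in outline, a familiar delegation pattern. As a rough illustration only, a controller-bound, scoped, time-boxed key with a logged revocation path might look like the following sketch, where all class and identifier names are hypothetical and not drawn from any DIF specification:

```python
import time
import uuid

class DelegatedKey:
    """Hypothetical sketch: a credential bound to a human controller,
    limited in scope and lifetime, with every use logged and a
    revocation switch. Illustrative only, not a real standard."""

    def __init__(self, controller_id, agent_id, scopes, ttl_seconds):
        self.key_id = str(uuid.uuid4())
        self.controller_id = controller_id   # the human the agent is bound to
        self.agent_id = agent_id
        self.scopes = set(scopes)            # e.g. {"calendar:read"}
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False
        self.audit_log = []                  # auditable path: every use recorded

    def use(self, scope):
        allowed = (not self.revoked
                   and time.time() < self.expires_at
                   and scope in self.scopes)
        self.audit_log.append((time.time(), scope, allowed))
        return allowed

    def revoke(self):
        # "the power to turn off that key at any time"
        self.revoked = True

# A controller issues a narrowly scoped, one-hour key to an agent.
key = DelegatedKey("did:example:alice", "did:example:agent1",
                   scopes={"calendar:read"}, ttl_seconds=3600)
assert key.use("calendar:read")       # within scope: allowed, and logged
assert not key.use("bank:transfer")   # out of scope: denied, and logged
key.revoke()
assert not key.use("calendar:read")   # revoked: denied
```

The point of the sketch is that the agent never holds the controller’s own keys: it holds a derivative credential whose scope, lifetime, and existence remain under the human’s control, with the audit log answering the accountability question McKinney raises.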
AI adds another element to be sold to doubting public
The confidence in models that has seized the identity sector reflects boundless optimism about the potential for mass uptake of digital identity. But adding AI agents to the mix means adding a layer of trust that will need to be sold to the public just like digital identity itself. Given the recent reception to Keir Starmer’s digital ID salvo in the UK, there is already more than enough work to be done before we begin granting citizenship to algorithms.
“The next question is how do we actually build this without taking on a big risk,” McKinney observes. He outlines four key steps. The first is “starting from policy. So, first policy and architecture: before we actually build anything, we sit down and we create the rules.”
That approach is dead on arrival, in that it has already failed: the tech is built, and the rules are not yet written.
“An internet of trust” comes up in the discussion, as does “trust elevation,” both phrases of Nicola Gallo of Nitro Agility. Ken Ebert, CEO of Indicio, steers the conversation toward biometric verifiable credentials, as a defense against AI deepfakes and exploding financial fraud.
So the conversation cycles: AI will do everything for us, but the risks mean we need to control it, and since we can’t yet we need more AI to combat fraud enabled by AI, because AI will do everything for us.
Which is to say, AI agents in the workflow have been sold as a revolution in efficiency. But if every innovation creates a new problem, trust will be elusive – which will only make the sales pitch for digital ID even harder.
The session as described aimed to focus on “turning the UN’s digital identity strategy into a deployable reality, grounded in the core principles of interoperability, privacy by design, and inclusion for all.” Inadvertently, it highlighted the tension between these stated goals and a relentless culture of innovation that will tell us something is indispensable before it even exists.
Article Topics
AI agents | biometric binding | Decentralized Identity Foundation (DIF) | digital identity | digital public infrastructure | identity access management (IAM) | legal identity | non-human identities | SDG 16.9