AI agents operating continuously at machine speed are breaking human-centric IAM

New research commissioned by Ping Identity and compiled by KuppingerCole Analysts shows that “agents are being deployed into production faster than enterprises can govern them, exposing gaps in identity systems designed for human users.”
The report, “From AI Agents to Trusted Digital Workers,” looks at the governance and identity and access management (IAM) challenges and critical vulnerabilities facing organizations in an agentic world, wherein systems originally designed for human interaction are being pushed to operate continuously. “Traditional frameworks assume applications with deterministic behavior which does not apply to autonomous agents acting probabilistically across system boundaries,” it says.
Per a release, the research “defines how enterprises can govern AI agents at runtime to close emerging authorization gaps.”
It also describes “a failure mode in which AI agents combine individually legitimate permissions in unintended ways, resulting in actions that bypass established controls and cannot be fully traced or governed.” The report says this represents a new class of identity risk in enterprise systems in which AI agents operate autonomously.
“Autonomous AI agents break core IAM assumptions around human consent, deterministic behavior, and event-level auditability, creating opaque delegation chains and prompt-injection exposure,” it reads. “Agent-to-agent delegation creates permission chains that are difficult to trace or enforce. When an agent acts on behalf of a user and calls a second agent, the resulting authorization context is ambiguous under most current IAM implementations.”
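The permission-combination failure mode the report describes can be made concrete with a small sketch. The code below is illustrative only, not any vendor's implementation: it models a two-hop delegation chain in which each agent holds an individually legitimate scope, and shows how naive union semantics at each hop yields combined authority neither grant intended, while attenuation (intersection) at each hop does not.

```python
# Hypothetical model of the report's failure mode: two individually
# legitimate scopes, combined across an agent-to-agent delegation,
# enable an action neither grant intended. All names are invented.

def effective_scopes(chain):
    """Naive delegation semantics: each hop UNIONS scopes."""
    scopes = set()
    for principal in chain:
        scopes |= principal["scopes"]
    return scopes

def attenuated_scopes(chain):
    """Safer semantics: each hop INTERSECTS (attenuates) scopes."""
    scopes = set(chain[0]["scopes"])
    for principal in chain[1:]:
        scopes &= principal["scopes"]
    return scopes

user_agent = {"id": "agent-a", "scopes": {"read:customer_db"}}
export_agent = {"id": "agent-b", "scopes": {"write:external_share"}}
chain = [user_agent, export_agent]

# Neither agent alone can both read the database and write to an
# external share, but the naively combined chain can do both.
combined = effective_scopes(chain)
print({"read:customer_db", "write:external_share"} <= combined)  # True

print(attenuated_scopes(chain))  # set() -- no residual authority
```

The ambiguity the report flags is precisely the choice between these two semantics: most current IAM implementations do not specify which context governs the second hop.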
Ping Identity prepared to hold agents accountable
Andre Durand, CEO of Ping Identity, sums up the situation: “Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs.”
As such, the research analysis proposes a reference architecture for governing AI agent identities in enterprise environments, consisting of four pillars: identity registration and lifecycle management, multi-tier authorization and access control, governance and oversight, and auditability with provenance. The approach is “grounded in identity, policy-based authorization, governance and oversight, along with accountability, extending identity and zero trust principles to support continuous, runtime authorization and governance.”
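A minimal sketch of what "control enforced at the moment an action occurs" could look like, touching all four pillars: a registry lookup (identity and lifecycle), a scope check (authorization), an active/suspended flag (oversight), and a per-decision audit record (auditability with provenance). This is an assumption-laden illustration, not Ping Identity's architecture.

```python
# Illustrative runtime authorization check for an agent, evaluated at
# action time rather than at session start. Registry contents, scope
# names, and the authorize() signature are all invented for this sketch.
import time

REGISTRY = {
    "agent-42": {"owner": "alice", "scopes": {"invoice:read"}, "active": True},
}
AUDIT_LOG = []

def authorize(agent_id: str, action: str) -> bool:
    entry = REGISTRY.get(agent_id)
    # Pillars 1-3: registered identity, in-scope action, not suspended.
    decision = bool(entry and entry["active"] and action in entry["scopes"])
    # Pillar 4: every decision leaves a provenance record.
    AUDIT_LOG.append({"ts": time.time(), "agent": agent_id,
                      "action": action, "allowed": decision})
    return decision

print(authorize("agent-42", "invoice:read"))    # True
print(authorize("agent-42", "invoice:delete"))  # False -- out of scope
```

The design point is that the decision and its audit record are produced together per action, so autonomous behavior stays traceable even without a human in the loop.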
Ping Identity’s Identity for AI product is designed to tackle these challenges. The company was recently recognized as an Overall Leader across multiple KuppingerCole Analysts Leadership Compass reports.
DigiCert launches AI Trust architecture for AI agents, models, content
Utah-based DigiCert has introduced a new AI Trust architecture designed to help organizations secure AI systems and their outputs, according to a press release. It is also “unveiling new capabilities to help secure autonomous agents and AI models, along with separate capabilities to provide verifiable content authenticity.”
The AI Trust architecture is a unified trust layer that spans AI agents, models and content, embedding cryptographic verification across the AI lifecycle to validate model integrity and establish content provenance.
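The kind of cryptographic verification such a trust layer performs can be sketched with a simple integrity check: compare a model artifact's digest against a published expected value before loading it. DigiCert's actual mechanism is not detailed in the release; the digest-comparison below is a generic, assumed illustration.

```python
# Generic integrity check of the sort a trust layer might apply to a
# model artifact: recompute its digest and compare to the value the
# publisher attests to. The artifact bytes here are stand-ins.
import hashlib

def artifact_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    # Any tampering with the bytes changes the digest, so the check fails.
    return artifact_digest(data) == expected_digest

model_bytes = b"...model weights..."       # stand-in for a model file
published = artifact_digest(model_bytes)   # value attested by the signer

print(verify_artifact(model_bytes, published))           # True
print(verify_artifact(b"tampered weights", published))   # False
```

In a production system the expected digest would itself be carried in a signed statement (certificate or content credential), so provenance chains back to an accountable issuer rather than a bare hash.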
“AI has created a new trust challenge,” says Amit Sinha, CEO of DigiCert. “Organizations are relying on agents, models, and content they can’t always verify. At DigiCert, our purpose is to give people confidence in the security, privacy, and authenticity of their digital interactions. With our AI Trust solution, we help organizations confirm what’s real, secure, and approved so AI can be used with confidence.”
DigiCert’s approach is laid out in a recent whitepaper, “The New Trust Architecture for AI.”
VeryAI leverages palm biometrics to bind agents to users
The question of how to govern autonomous AI agents also underpins a new Know Your Agent (KYA) protocol from VeryAI. According to a release, the platform, called ag9, “is the only platform that combines reverse CAPTCHA capability with palm biometric identity verification to prove that a real machine is operating and a real human authorized it.”
VeryAI’s mobile palm scan technology cryptographically binds an agent to a real, verified person using palm biometrics; platforms can then query in real time whether an agent is owned by a verified human, with a response in under two seconds. Meanwhile, ag9’s reverse CAPTCHA function challenges agents to prove they are legitimate, scoped, and operating in good faith.
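The release does not publish ag9's API, so the sketch below only illustrates the shape of the real-time query it describes: given an agent identifier, answer whether it is cryptographically bound to a biometrically verified human. Every name and data structure here is invented.

```python
# Hypothetical ownership-query shape for a Know Your Agent check.
# BINDINGS stands in for the platform's store of palm-verified bindings;
# none of these identifiers come from VeryAI's actual system.
BINDINGS = {
    "agent-7f3a": {"human_verified": True, "method": "palm"},
    "agent-9c1d": {"human_verified": False, "method": None},
}

def is_human_backed(agent_id: str) -> bool:
    """Real-time answer to: is this agent owned by a verified human?"""
    binding = BINDINGS.get(agent_id)
    return bool(binding and binding["human_verified"])

print(is_human_backed("agent-7f3a"))     # True
print(is_human_backed("agent-9c1d"))     # False
print(is_human_backed("agent-unknown"))  # False -- no binding at all
```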
Says Zach Meltzer, CEO of VeryAI, “a single person can now deploy thousands of agents acting, transacting, and interacting autonomously with zero accountability. The question that platforms need to answer is, ‘Who or what is acting right now, and where does the responsibility lie?’ KYC wasn’t built to answer that. Ag9 is.”
Accenture joins Hedera Council to govern DLT network
Accenture has joined the Hedera Council, the governing body of the Hedera public network, a distributed ledger that uses a variant of proof of stake to reach consensus, rather than the proof-of-work consensus mechanisms used by traditional blockchain networks.
A release says the company will contribute to the governance of the Hedera public network, operate a network consensus node, and “work with Hedera and its Council members to support the delivery of trust-based solutions for financial services institutions, government agencies, and large enterprises.”
“The pace of agent-driven automation requires that enterprises reinvent their approaches to trust,” says Bryan Rich, Accenture’s global data and AI lead for health and public service. “The Hedera public network and its unique governance model enables government agencies and enterprises in regulated environments to transact in a transparent and auditable fashion, strengthening compliance with relevant policies.”