Agentic AI breaks zero trust: Here’s how to fix it

By Delme Herbert, Technical Product Manager at Strata Identity

It’s Monday morning, and your head of audit is on the line. An AI agent deployed to reconcile invoices has passed every Zero Trust control — valid credentials, authenticated sessions, authorized transactions. Yet the logs tell a different story: it spawned sub-agents, aggregated sensitive data, and left tokens unsecured.

Zero Trust did exactly what it was designed to do, but it wasn’t enough.

This scenario illustrates a hard truth: Zero Trust, as originally defined, is no longer sufficient in a world of AI agents. To remain effective, it must evolve to account for intent, context, and lineage.

Trust first vs. Zero Trust

Zero Trust is grounded in the principle of “never trust, always verify”: every request is authenticated, authorized, and continuously validated. Agentic AI systems, however, operate on a different assumption: trust first until proven otherwise.

Agents are typically launched with valid tokens, broad context from Model Context Protocol (MCP) servers, and the freedom to generate sub-agents. Once trusted, their downstream actions may not be evaluated against intent. This creates systemic blind spots because intent can shift rapidly, and agents can scale actions in ways that human-driven policy enforcement never anticipated.

To reconcile this dissonance, Zero Trust must validate not just who is asking for access, but why and in what context. Intent and context are becoming the new basis of trust decisions, expanding security from static identity verification into dynamic behavioral validation.
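
As a rough sketch of what this could look like in practice, the Python snippet below layers an intent and context check on top of a conventional token check. The policy table, intents, and field names are hypothetical, not a reference to any existing product or standard:

    from dataclasses import dataclass, field

    @dataclass
    class AccessRequest:
        agent_id: str            # who is asking (classic Zero Trust)
        declared_intent: str     # why the agent says it is acting
        resource: str            # what it wants to touch
        context: dict = field(default_factory=dict)  # e.g. task, environment

    # Hypothetical policy: each intent maps to the resources and contexts
    # in which that intent is considered legitimate.
    INTENT_POLICY = {
        "reconcile_invoices": {
            "allowed_resources": {"erp:invoices:read", "erp:ledger:read"},
            "required_context": {"business_hours": True},
        },
    }

    def authorize(request: AccessRequest, token_is_valid: bool) -> bool:
        """Conventional identity check plus intent and context validation."""
        if not token_is_valid:
            return False  # "never trust, always verify" still applies
        policy = INTENT_POLICY.get(request.declared_intent)
        if policy is None:
            return False  # unknown intent: deny by default
        if request.resource not in policy["allowed_resources"]:
            return False  # resource outside the declared goal's scope
        # Every contextual constraint must hold at decision time.
        return all(request.context.get(k) == v
                   for k, v in policy["required_context"].items())

    # An agent with a valid token but off-intent behavior is still denied:
    req = AccessRequest("agent-42", "reconcile_invoices",
                        "hr:salaries:read", {"business_hours": True})
    assert authorize(req, token_is_valid=True) is False

The change in failure mode is the point: a perfectly valid token no longer guarantees access once the request falls outside the agent’s declared goal or required context.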

Auditability in agentic chains

Traditional IAM auditing is built for static identities and discrete sessions. Logs capture who accessed what, when, and whether access was granted or denied. Agentic AI creates a different reality.

Agents spawn sub-agents, exchange context, and execute actions across extended timelines. The result is an agentic chain, a complex sequence of delegations and actions involving multiple entities. Without robust lineage tracking, it becomes impossible to determine whether a sub-agent has exceeded its scope, whether parent agents have delegated responsibly, or whether sensitive data has been exfiltrated.

Real-time observability is now as critical as access control. Enterprises need visibility into intent, parent/child relationships, prompt inputs, and resource usage. Without it, forensic investigation and regulatory compliance are reduced to guesswork.
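
What might that telemetry look like? The sketch below (the event shape and field names are illustrative assumptions, not an established schema) records agent lineage as structured events and runs a simple scope-escalation check across the chain:

    import json
    import time
    import uuid

    # Hypothetical structured event for lineage tracking: it captures
    # parent/child relationships, declared intent, and prompt inputs
    # alongside what a conventional access log would record.
    def lineage_event(parent_id, agent_id, intent, prompt, resources):
        return {
            "event": "agent_spawned",
            "timestamp": time.time(),
            "trace_id": str(uuid.uuid4()),
            "parent_agent": parent_id,       # None for the root agent
            "agent": agent_id,
            "declared_intent": intent,
            "prompt_input": prompt,          # what the agent was told to do
            "resources_granted": resources,  # scopes handed down the chain
        }

    log = [
        lineage_event(None, "agent-root", "reconcile_invoices",
                      "Reconcile Q3 invoices", ["erp:invoices:read"]),
        lineage_event("agent-root", "agent-sub-1", "fetch_vendor_data",
                      "Pull vendor records", ["crm:vendors:read"]),
    ]

    # A basic lineage check: a sub-agent should not hold scopes its
    # parent was never granted.
    granted = {e["agent"]: set(e["resources_granted"]) for e in log}
    for e in log:
        parent = e["parent_agent"]
        if parent and not granted[e["agent"]] <= granted[parent]:
            print("scope escalation:", json.dumps(e, indent=2))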

Limitations of existing identity standards

Current standards and access models such as OAuth, OIDC, RBAC, and ABAC were built for humans and conventional applications. They authenticate users, exchange tokens, and enforce static roles or attributes. But they cannot capture agent intent, lineage, or delegation.

An OAuth token may confirm a client’s identity and its scopes, but it cannot declare why an agent is acting or what chain of downstream actions will follow. Likewise, RBAC and ABAC models break down when behavior emerges dynamically from agent-to-agent interactions.
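
The gap is visible in the token itself. Below, a typical decoded OAuth access token payload is contrasted with the additional claims an agent-aware token would need; the extra claim names are hypothetical, as no current OAuth or OIDC specification defines them:

    # A typical (decoded) OAuth 2.0 access token payload: it proves
    # identity and static scopes, nothing more.
    standard_claims = {
        "sub": "agent-42",                # who the client is
        "iss": "https://idp.example.com",
        "scope": "invoices.read ledger.read",
        "exp": 1767225600,
    }

    # What an agent-aware token might additionally carry. These claim
    # names are hypothetical; no published spec defines them today.
    agentic_claims = {
        **standard_claims,
        "intent": "reconcile_invoices",   # why the agent is acting
        "parent_agent": "agent-root",     # delegation lineage
        "delegation_depth": 1,            # position in the agentic chain
        "max_delegation_depth": 2,        # cap on sub-agent spawning
    }

    # Nothing in the standard claims lets a resource server reason
    # about purpose or lineage:
    assert "intent" not in standard_claims and "intent" in agentic_claims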

What’s needed is intent-based access control (IBAC), a model that validates agent goals, checks contextual constraints, and monitors permissible outcomes. This will require updated standards that codify these concepts. Otherwise, enterprises will be forced into fragile, bespoke solutions that will not scale.
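
A distinguishing feature of IBAC is that enforcement does not stop at the access decision: outcomes are audited against the declared goal. Here is a minimal sketch of that second half, with illustrative intent and action names:

    # Outcome monitoring under IBAC: compare the actions an agent actually
    # took against the outcomes its declared intent permits. All names
    # here are illustrative assumptions.
    PERMISSIBLE_OUTCOMES = {
        "reconcile_invoices": {"read_invoice", "read_ledger", "flag_mismatch"},
    }

    def audit_outcomes(declared_intent: str, actions_taken: list[str]) -> list[str]:
        """Return any actions that fall outside the declared intent."""
        allowed = PERMISSIBLE_OUTCOMES.get(declared_intent, set())
        return [a for a in actions_taken if a not in allowed]

    violations = audit_outcomes(
        "reconcile_invoices",
        ["read_invoice", "read_ledger", "export_customer_pii"],  # drift
    )
    print(violations)  # ['export_customer_pii'] -> alert, revoke, investigate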

Addressing compliance and risk

AI agents introduce regulatory expectations that cannot be ignored. Analysts caution that enterprises deploying agents without governance will likely face costly redesigns in the years ahead. The takeaway is clear: early decisions around agent oversight will have long-term consequences.

But this is also an opportunity. By building frameworks for lineage, observability, and accountability now, organizations can avoid compliance gaps while positioning themselves as leaders in safe and responsible AI adoption.

Regulators will expect more than assurances of Zero Trust compliance. They will want proof that agent actions are subjected to guardrails, decision chains are transparent, and incidents can be reconstructed. Enterprises that move early will not only reduce exposure to risk and penalties but also strengthen trust with customers, auditors, and business partners.

Practical guidelines

Organizations cannot afford to take a “wait and see” approach while standards evolve. The following five actions provide a pragmatic starting point for securing agentic AI today:

  1. Experiment in Controlled Environments – Build and test small-scale agents and MCP servers internally to understand behavior, limitations, and security gaps.
  2. Define Observability Requirements – Identify the telemetry you must capture: agent lineage, intent, delegation chains, and context-specific actions.
  3. Develop Intent Policies – Draft rules that validate not only identity but the goals and permissible outcomes of agent activity.
  4. Engage with Standards Communities – Participate in efforts such as the OpenID Foundation AI and Identity group to shape evolving protocols and avoid isolation.
  5. Prepare for Audits – Design systems that can prove not just who had access, but why, in what context, and with what results; a reconstruction sketch follows this list.
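
As an illustration of the fifth point, the sketch below reconstructs a delegation chain from structured audit events, answering who acted, on whose behalf, and with what result. The event shape and names are hypothetical:

    # Hypothetical audit events: one per agent, linked by parent.
    events = [
        {"agent": "agent-root", "parent": None,
         "intent": "reconcile_invoices", "result": "completed"},
        {"agent": "agent-sub-1", "parent": "agent-root",
         "intent": "fetch_vendor_data", "result": "completed"},
        {"agent": "agent-sub-2", "parent": "agent-sub-1",
         "intent": "export_report", "result": "denied"},
    ]

    def chain_for(agent_id: str, events: list[dict]) -> list[dict]:
        """Walk parent links from a given agent back to the root."""
        by_agent = {e["agent"]: e for e in events}
        chain, current = [], by_agent.get(agent_id)
        while current:
            chain.append(current)
            current = by_agent.get(current["parent"])
        return list(reversed(chain))  # root first, leaf last

    # Reconstruct how agent-sub-2 came to exist and what it attempted:
    for step in chain_for("agent-sub-2", events):
        print(f'{step["agent"]}: intent={step["intent"]}, result={step["result"]}')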

Zero Trust has served enterprises well in a world of humans and applications. But agentic AI breaks the model. By investing in observability, intent-based policies, and early governance frameworks, enterprises can prepare for a future where AI acts alongside humans, not outside the bounds of trust.

About the author

Delme Herbert is a Technical Product Manager at Strata Identity, where he leads product strategy to help enterprises manage human and AI identities across applications and multi-cloud environments. With over a decade of product leadership experience at companies including Thomson Reuters and BlueCat, he specializes in translating complex infrastructure challenges into solutions that improve security, performance, and business outcomes.
