AI has static identity verification in its crosshairs. Now what?

AI agents operate autonomously and spin up and disappear in seconds, breaking trust models built on a single, perpetual login

By Mike Engle, Co-Founder and Chief Strategy Officer of 1Kosmos

Enterprise identity systems were designed for predictable users and discrete access events, not ephemeral AI agents that autonomously make decisions and perform sensitive transactions. As intelligent systems initiate workflows, call APIs, and move sensitive data without waiting for revalidation, trust is often granted once with no expiration date. That assumption leaves security teams blind during execution, exactly when intent, permissions, and impact must be monitored and governed.

This shift has quietly broken the foundational assumption of enterprise identity: that verification is something you do once, at the start of a session. Unlike humans, AI agents generate massive volumes of actions that security teams must verify, interpret, and trust in real time.

Traditional identity at a crossroads

Here’s a simple example that exposes this security gap. I wanted a small agent to streamline my meeting follow-ups. All it needed to do was read my calendar, access transcripts, and help generate thank-you emails.

It took three weeks of back-and-forth with consultants just to define Microsoft Entra permissions. When we finally tested it, the agent had far more access than anyone expected, including visibility into data it should never have been able to touch.

This wasn’t a technical failure; it was an architectural one. Permission models for agents are complicated to create and hard to manage.

Identity models based on “joiner–mover–leaver” workflows and static permission assignments cannot keep pace with the fluid and temporary nature of AI agents. These systems assume identities are created carefully, permissions are assigned deliberately, and changes rarely happen. AI changes all of that. An agent can be created, perform sensitive tasks, and terminate within seconds. If your verification model only checks identity at login, you’re leaving the entire session vulnerable.

Don’t scale AI if you can’t verify it

That’s why we need to implement the principle of Verifier’s Law: You should only deploy AI agents at the pace you can verify their output.

If you don’t understand what an agent is doing, can’t confirm it did the right thing, or can’t detect when it drifts from expected behavior, then you’re not automating; you’re gambling.
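One way to operationalize Verifier’s Law is a deployment gate: new agents are admitted only while the backlog of unverified agent actions stays within your review capacity. The sketch below is illustrative only; every class and method name is hypothetical, not part of any product described in this article.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class VerificationGate:
    """Admit new agents only while the backlog of unverified
    agent actions stays within review capacity (Verifier's Law)."""
    max_unverified: int = 100              # actions we can still review
    pending: deque = field(default_factory=deque)

    def record_action(self, agent_id: str, action: str) -> None:
        # Every agent action waits in the queue until a reviewer
        # (human or automated check) verifies it.
        self.pending.append((agent_id, action))

    def mark_verified(self) -> None:
        if self.pending:
            self.pending.popleft()

    def can_deploy_new_agent(self) -> bool:
        # Deploy only at the pace you can verify output.
        return len(self.pending) < self.max_unverified

gate = VerificationGate(max_unverified=2)
gate.record_action("agent-1", "send_email")
gate.record_action("agent-1", "read_calendar")
print(gate.can_deploy_new_agent())  # False: review backlog at capacity
gate.mark_verified()
print(gate.can_deploy_new_agent())  # True: capacity freed
```

The point of the gate is the inversion it forces: verification throughput, not agent-building speed, sets the deployment rate.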

We’ve already seen the consequences. Organizations experimenting with enterprise search agents quickly discover that those agents surface documents the business didn’t even know existed, or didn’t realize were exposed to far too many people. Other teams find that a seemingly harmless coding assistant is quietly calling external APIs or pulling in open source packages that no one vetted.

These agents are not malicious. They’re simply fast, persistent, and extremely literal. If your identity verification and access controls don’t operate at the same speed, small oversights turn into major governance failures.

Static identity is no match for dynamic AI

Authentication, step-up challenges, and even strong biometrics play a critical role in verifying humans, but they do nothing to address four key AI-native identity challenges:

  •     Ephemeral lifespans: Agents pop into existence and vanish in seconds.
  •     Expansive permission requests: Most agents request broad access simply because it’s easier to build them that way.
  •     Opaque decision-making: Even well-designed agents can behave unexpectedly when given ambiguous instructions.
  •     Unbounded scale: One human can spawn hundreds of agents, each with different access and behavior profiles.

Identity verification as a runtime control plane

Securing AI-driven enterprises requires a shift similar to what we saw in the move from traditional firewalls to zero-trust architectures. We didn’t eliminate networks; we elevated policy and verification to operate continuously at runtime. Identity verification for AI must follow the same path. This means building a system that can:

  •     Assign verifiable identities to every human and machine actor
  •     Evaluate permissions dynamically based on context and intent
  •     Enforce least privilege at high velocity
  •     Verify actions, not just entry points
  •     Detect drift or anomalous behavior in real time
  •     Terminate access instantly when conditions change

This is why frameworks like SPIFFE and modern workload identity systems are receiving so much attention. They treat identity as a short-lived, cryptographically verifiable construct that can be created, used, and retired in seconds, exactly the model AI agents require.
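SPIFFE itself issues X.509 or JWT SVIDs through a workload API; the toy token below is not SPIFFE, only an illustration of the underlying pattern: an identity whose expiry is baked in, covered by a signature, and measured in seconds.

```python
import hashlib
import hmac
import time

SECRET = b"issuer-signing-key"  # stands in for the issuer's key material

def issue_identity(agent_id: str, ttl_seconds: int = 30) -> str:
    """Mint a short-lived, verifiable identity token; the expiry
    is part of the signed payload, so it cannot be stripped off."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{agent_id}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_identity(token: str) -> bool:
    """Accept only if the signature checks out and the expiry has not
    passed: identities retire themselves in seconds by default."""
    agent_id, expires, sig = token.rsplit(".", 2)
    payload = f"{agent_id}.{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)

token = issue_identity("report-agent", ttl_seconds=30)
print(verify_identity(token))  # True while fresh
tampered = token.rsplit(".", 1)[0] + "." + "0" * 64
print(verify_identity(tampered))  # False: signature mismatch
```

Because the token expires on its own, there is no standing credential to revoke after the agent terminates, which is exactly the property ephemeral agents need.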

Human activity is becoming the minority: autonomous systems act faster than we can, and they are spun up and terminated before governance can keep up. That’s why identity verification must shift from a checkpoint to a real-time trust engine that evaluates every action from every actor, human or AI.

AI agents aren’t just changing how work gets done; they’re redefining identity itself. Verification must evolve with them.

About the author

Mike Engle, Co-Founder and Chief Strategy Officer of 1Kosmos, is a proven information technology executive, company builder, and entrepreneur. He is an expert in information security, business development, authentication, biometric authentication, and product design and development. His career includes serving as head of information security at Lehman Brothers and co-founding Bastille Networks.
