
AI has static identity verification in its crosshairs. Now what?

AI agents operate autonomously and spin up and disappear in seconds, breaking trust models built on a single login

By Mike Engle, Co-Founder and Chief Strategy Officer of 1Kosmos

Enterprise identity systems were designed for predictable users and discrete access events, not ephemeral AI agents that autonomously make decisions and perform sensitive transactions. As intelligent systems initiate workflows, call APIs, and move sensitive data without waiting for revalidation, trust is often granted once with no expiration date. That assumption leaves security teams blind during execution, exactly when intent, permissions, and impact must be monitored and governed.

This shift has quietly broken the foundational assumption of enterprise identity: that verification is something you do once, at the start of a session. Unlike humans, AI agents generate massive volumes of actions that security teams must verify, interpret, and trust in real time.

Traditional identity at a crossroads

Here’s a simple example that exposes this security gap. I wanted a small agent to streamline my meeting follow-ups. All it needed to do was read my calendar, access transcripts, and help generate thank-you emails.

It took three weeks of back-and-forth with consultants just to define Microsoft Entra permissions. When we finally tested it, the agent had far more access than anyone expected, including visibility into data it should never have been able to touch.

This wasn’t a technical failure; it was an architectural one. Permission models for agents are complicated to create and hard to manage.

Identity models based on “joiner–mover–leaver” workflows and static permission assignments cannot keep pace with the fluid and temporary nature of AI agents. These systems assume identities are created carefully, permissions are assigned deliberately, and changes rarely happen. AI changes all of that. An agent can be created, perform sensitive tasks, and terminate within seconds. If your verification model only checks identity at login, you’re leaving the entire session vulnerable.

Don’t scale AI if you can’t verify it

That’s why we need to implement the principle of Verifier’s Law: You should only deploy AI agents at the pace you can verify their output.

If you don’t understand what an agent is doing, cannot confirm it did the right thing, or can’t detect when it starts drifting from expected behavior, then you’re not automating; you’re gambling.
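Verifier’s Law can be made concrete as a throttle: agent output is committed only as fast as a verifier can check it, and anything that fails the check is quarantined rather than executed. The sketch below is illustrative (the names `verify_output`, `run_agents`, and the scope fields are assumptions, not a real product API).

```python
"""Sketch of Verifier's Law: commit agent actions only at the pace they
can be verified. All names and fields here are illustrative assumptions."""

from collections import deque


def verify_output(action: dict) -> bool:
    # Placeholder check: the action must stay within its declared scope.
    return action["resource"] in action["allowed_scope"]


def run_agents(pending_actions: deque, verify_budget: int) -> list:
    """Commit at most `verify_budget` actions per cycle; the rest wait.
    Unverified output is never executed."""
    committed = []
    for _ in range(min(verify_budget, len(pending_actions))):
        action = pending_actions.popleft()
        if verify_output(action):
            committed.append(action)
        else:
            # Drift from expected behavior: quarantine instead of executing.
            action["quarantined"] = True
    return committed


actions = deque([
    {"resource": "calendar", "allowed_scope": ["calendar", "transcripts"]},
    {"resource": "payroll",  "allowed_scope": ["calendar", "transcripts"]},
])
done = run_agents(actions, verify_budget=10)
```

If verification capacity is the bottleneck, that bottleneck is the deployment limit: an out-of-scope action (the payroll read above) is quarantined, not silently executed.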

We’ve already seen the consequences. Organizations experimenting with enterprise search agents quickly discover that those agents surface documents the business didn’t even know existed, or didn’t realize were exposed to far too many people. Other teams find that a seemingly harmless coding assistant is quietly calling external APIs or pulling in open source packages that no one vetted.

These agents are not malicious. They’re simply fast, persistent, and extremely literal. If your identity verification and access controls don’t operate at the same speed, small oversights turn into major governance failures.

Static identity is no match for dynamic AI

Authentication, step-up challenges, and even strong biometrics play a critical role in verifying humans, but they do nothing to address four key AI-native identity challenges:

  •     Ephemeral lifespans: Agents pop into existence and vanish in seconds.
  •     Expansive permission requests: Most agents request broad access simply because it’s easier to build them that way.
  •     Opaque decision-making: Even well-designed agents can behave unexpectedly when given ambiguous instructions.
  •     Unbounded scale: One human can spawn hundreds of agents, each with different access and behavior profiles.

Identity verification as a runtime control plane

Securing AI-driven enterprises requires a shift similar to what we saw in the move from traditional firewalls to zero-trust architectures. We didn’t eliminate networks; we elevated policy and verification to operate continuously at runtime. Identity verification for AI must follow the same path. This means building a system that can:

  •     Assign verifiable identities to every human and machine actor
  •     Evaluate permissions dynamically based on context and intent
  •     Enforce least privilege at high velocity
  •     Verify actions, not just entry points
  •     Detect drift or anomalous behavior in real time
  •     Terminate access instantly when conditions change
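The capabilities above can be sketched as a policy object that is consulted on every action rather than only at login. This is a minimal sketch under assumed names (`RuntimePolicy`, `evaluate`, `revoke`); real deployments would back these checks with a policy engine and signed credentials.

```python
"""Minimal sketch of identity verification as a runtime control plane:
every action from every actor is checked against current context.
Class and method names are illustrative assumptions, not a product API."""

import time


class RuntimePolicy:
    def __init__(self, actor: str, scopes: set, ttl_seconds: float):
        self.actor = actor
        self.scopes = scopes                       # least-privilege grants
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def evaluate(self, action: str, resource: str) -> bool:
        """Verify the action itself, not just the entry point."""
        if self.revoked or time.monotonic() > self.expires_at:
            return False
        return (action, resource) in self.scopes

    def revoke(self) -> None:
        # Terminate access instantly when conditions change.
        self.revoked = True


policy = RuntimePolicy("agent-42", {("read", "calendar")}, ttl_seconds=60)
allowed = policy.evaluate("read", "calendar")       # in scope, not expired
blocked = policy.evaluate("read", "payroll")        # outside least privilege
policy.revoke()
after_revoke = policy.evaluate("read", "calendar")  # revoked mid-session
```

The key difference from session-based models is that `evaluate` runs on every action, so revocation and expiry take effect mid-session instead of at the next login.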

This is why frameworks like SPIFFE and modern workload identity systems are receiving so much attention. They treat identity as a short-lived, cryptographically verifiable construct that can be created, used, and retired in seconds, exactly the model AI agents require.
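To illustrate the model (not the actual SPIFFE Workload API, which issues X.509 or JWT SVIDs), here is a hedged sketch of a short-lived, cryptographically verifiable identity using a plain HMAC over an identity string and an expiry; the key and identifiers are made up for the example.

```python
"""Hedged sketch of a short-lived, verifiable workload identity in the
style SPIFFE popularizes. Real SPIFFE issues X.509/JWT SVIDs via the
Workload API; this stdlib-only HMAC version just shows the lifecycle."""

import hashlib
import hmac
import time

SECRET = b"issuer-signing-key"  # illustrative; real issuers rotate keys


def mint_identity(spiffe_id: str, ttl: float) -> dict:
    """Create a credential that expires in `ttl` seconds."""
    expires = time.time() + ttl
    payload = f"{spiffe_id}|{expires}".encode()
    return {
        "id": spiffe_id,
        "expires": expires,
        "sig": hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
    }


def verify_identity(cred: dict) -> bool:
    """Check both the signature and the expiry on every use."""
    payload = f"{cred['id']}|{cred['expires']}".encode()
    good_sig = hmac.compare_digest(
        cred["sig"], hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    )
    return good_sig and time.time() < cred["expires"]


cred = mint_identity("spiffe://example.org/agent/followup", ttl=30)
fresh_ok = verify_identity(cred)   # valid while fresh
cred["id"] = "spiffe://example.org/agent/other"
tampered_ok = verify_identity(cred)  # tampering breaks the signature
```

Because the credential carries its own expiry and signature, it can be created, used, and retired in seconds with no standing account to clean up afterward.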

Human activity is becoming the minority of enterprise actions: autonomous systems that act faster than any person are spun up and terminated before governance can keep up. That’s why identity verification must shift from a checkpoint to a real-time trust engine that evaluates every action from every actor, human or AI.

AI agents aren’t just changing how work gets done; they’re redefining identity itself. Verification must evolve with them.

About the author

Mike Engle, Co-Founder and Chief Strategy Officer of 1Kosmos, is a proven information technology executive, company builder, and entrepreneur. He is an expert in information security, business development, authentication, biometric authentication, and product design and development. His career includes serving as head of information security at Lehman Brothers and co-founding Bastille Networks.
