AI agents prompt new approaches to identity and access management

What AI agents, their hour come round at last, skitter around the workplace in a widening gyre? And what must be done with the crumbling ruins of legacy identity and access management (IAM) systems, with the barbarians of fraud at the gates?
As agentic AI automates workflows, firms are staring down a decision of epic proportions: stay the course and crumble before the agentic deluge – or be reborn in passionate intensity (and automated algorithmic monitoring and management). A series of investments, product developments and discussions showcases how the issue has everyone waxing agentic.
Fabrix nets funding to advance ‘AI-ready identity fabric graph’
Fabrix Security, an identity and access management startup based in Tel Aviv, has emerged from stealth with $8 million in seed funding for its AI-native identity security platform. A release says the seed round was led by Norwest, Merlin Ventures and Jibe Ventures, and will be used for additional product development as well as sales and marketing.
Fabrix aims to enable enterprises to easily manage and secure both human and non-human identities (such as bots, API keys, service accounts and AI agents), to “enforce least privilege and reduce the attack surface, without compromising business velocity.”
Company CEO Raz Rotenberg says IAM problems have been around for decades, but identity sprawl has made them worse. “Both human and non-human identities are growing exponentially. At this scale, traditional IAM systems that rely on manual processes can’t achieve their main objectives.”
Fabrix incorporates AI agents specifically created and trained to master IAM tasks. Per the release, it uses “agentless connectors to gather identity and permission data, creating an AI-ready identity fabric graph.”
“AI agents then operate on this graph to automate tasks, integrate with IAM workflows, and proactively enforce least privilege access.”
Unlike traditional IAM systems that rely on manual processes, AI-native IAM adapts permissions based on runtime usage and uses large language models (LLMs) to “discover, understand, and optimize every aspect of identity and access across both human and non-human identities.”
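The core idea described above – a graph of identities and permissions that agents inspect against runtime usage to enforce least privilege – can be sketched minimally. Everything here (class names, the 90-day idle threshold, the service-account example) is an illustrative assumption, not Fabrix’s actual API:

```python
from datetime import datetime, timedelta

# Minimal sketch (not Fabrix's implementation): an identity-permission
# graph where grants that are never exercised at runtime, or sit idle
# past a threshold, surface as candidates for revocation.

class IdentityGraph:
    def __init__(self):
        # identity -> {permission: timestamp of last use, or None}
        self.grants = {}

    def grant(self, identity, permission):
        self.grants.setdefault(identity, {})[permission] = None

    def record_use(self, identity, permission, when):
        if permission in self.grants.get(identity, {}):
            self.grants[identity][permission] = when

    def stale_grants(self, now, max_idle=timedelta(days=90)):
        """Permissions never used, or idle longer than max_idle."""
        stale = []
        for identity, perms in self.grants.items():
            for perm, last_used in perms.items():
                if last_used is None or now - last_used > max_idle:
                    stale.append((identity, perm))
        return stale

graph = IdentityGraph()
graph.grant("svc-billing", "db:read")
graph.grant("svc-billing", "db:admin")   # over-provisioned grant
graph.record_use("svc-billing", "db:read", datetime(2025, 6, 1))

# the unused "db:admin" grant surfaces for revocation
print(graph.stale_grants(now=datetime(2025, 6, 2)))
```

The point of putting both human and non-human identities in one graph is that the same idle-grant query covers employees, service accounts and AI agents alike.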
Scalekit gets $5.5 million in seed round
Scalekit is also developing IAM systems that must verify AI agents and their permissions, and has released an authentication stack purpose-built for agentic apps. In tandem, the firm announced a $5.5 million seed round led by Together Fund and Z47, with angel backing from Adam Frankl, Oliver Jay, Jagadeesh Kunda and others, according to a release.
Scalekit secures both incoming authentication for Model Context Protocol (MCP) servers and outgoing agent actions to third-party tools, such as Gmail, Slack, HubSpot and Notion.
“For years, software focused on blocking bots. Now business apps must let authenticated agents in and decide exactly what data they can read or write,” says Satya Devarakonda, co-founder and CEO. “Scalekit sits at that intersection of verifying every agent’s identity and enforcing precise, least-privilege access through a single drop-in toolkit.”
Ravi Madabhushi, co-founder and CTO, adds that “after scaling auth for 50,000 businesses at Freshworks, we saw the next challenge coming: agent identities that live in code, not in user directories. Scalekit delivers short-lived scoped tokens and plug-in tooling that make agentic workflows secure.”
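The “short-lived scoped tokens” Madabhushi mentions can be illustrated with a minimal HMAC-signed token that binds an agent identity to explicit scopes and a tight expiry. This is a generic sketch of the pattern, not Scalekit’s implementation; the signing key and scope names are invented for the demo:

```python
import base64, hashlib, hmac, json, time

# Assumption for the demo: a local signing key. A real deployment would
# use a managed key and a standard token format such as JWT.
SECRET = b"demo-signing-key"

def mint_token(agent_id, scopes, ttl_seconds=300):
    """Issue a short-lived token scoping an agent to explicit actions."""
    claims = {"sub": agent_id, "scopes": scopes,
              "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def authorize(token, required_scope):
    """Check signature, expiry, and scope before allowing an action."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                 # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() >= claims["exp"]:
        return False                 # short lifetime limits replay
    return required_scope in claims["scopes"]

token = mint_token("agent:crm-sync", ["hubspot:contacts.read"])
print(authorize(token, "hubspot:contacts.read"))   # granted scope
print(authorize(token, "gmail:send"))              # outside the grant
```

Because the token lives in code rather than a user directory, the expiry and scope list – not a human login session – are what bound the agent’s blast radius.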
SecureAuth says call is coming from inside the IAM system
A release from SecureAuth reaches into the identity and access metaphor grab-bag to warn that, in the words of SecureAuth CEO Joseph Dhanapal, “attackers are no longer rattling the doorknob; they’re already inside the lobby before most defenses notice.”
New data in the 2025 Verizon Data Breach Investigations Report shows that generative-AI tools can automate attacks like credential stuffing, session hijacking and real-time phishing on an unprecedented scale; it cites compromised credentials in 68 percent of breaches.
SecureAuth CPO Brook Lovatt says “organizations can reduce exposure by evaluating device, behavior and network signals throughout every user session, introducing additional verification when risk rises and tying session length to real-time assurance levels.”
Lovatt’s three guiding principles for safer CIAM in 2025 are “continuous risk scoring that checks context at every step, dynamic just-in-time friction that surfaces extra challenges only when risk increases, and session-aware authorization that adjusts privileges and expiration in real time.”
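Those three principles compose naturally into a per-request decision loop. The sketch below is illustrative only – the signal names, weights and thresholds are assumptions, not SecureAuth product logic:

```python
# Illustrative sketch of Lovatt's three principles; weights and
# thresholds are invented for the demo.

RISK_WEIGHTS = {"new_device": 0.4, "impossible_travel": 0.5,
                "tor_exit_node": 0.3, "unusual_hours": 0.2}

def risk_score(signals):
    """Continuous risk scoring: context re-checked at every step."""
    return min(1.0, sum(RISK_WEIGHTS[s] for s in signals if s in RISK_WEIGHTS))

def step_decision(signals):
    score = risk_score(signals)
    if score >= 0.7:
        return {"action": "deny", "session_ttl_minutes": 0}
    if score >= 0.4:
        # dynamic just-in-time friction: challenge only when risk rises
        return {"action": "step_up_mfa", "session_ttl_minutes": 5}
    # session-aware authorization: long sessions only at high assurance
    return {"action": "allow", "session_ttl_minutes": 60}

print(step_decision([]))                                   # allow, long session
print(step_decision(["new_device"]))                       # extra challenge
print(step_decision(["new_device", "impossible_travel"]))  # deny
```

Note how session lifetime shrinks as the score rises, which is the “tying session length to real-time assurance levels” piece of Lovatt’s advice.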
Look on my code, ye mighty, and upgrade: IAM 3.0 rewrites the book
An article in Identity Fusion takes a different approach to raising the red flag, asking us to spare a moment of silence for IAM 2.0 – “a system built for the age of web logins, badges and human employees as the center of the security universe.” It now belongs beside mainframes, client-server and firewalls in the museum of security measures past.
“Monuments become ruins,” says the author, channeling his inner Shelley. “And IAM 2.0 is already crumbling, because the world it was built for no longer exists.”
We now find ourselves in a world of APIs talking to APIs, bots spawning bots, and “agentic AI weaving decisions across systems with no pause for coffee, no need for rest.”
And what lumbering beast shall rise to stand where IAM 2.0 once stood, pitted now against a horde of tireless algorithmic bots? That would be IAM 3.0, which the author specifies is not a product but a paradigm shift, grounded in three principles: Autonomous Identity, Contextual Access and Modular, Orchestrated Fabric.
“IAM 3.0 requires a cultural leap: continuous trust, continuous monitoring, continuous response. It’s less like filing paperwork and more like running a security operations center.”
“Polishing the old guard won’t save us. Password resets, MFA widgets, and monolithic platforms can’t hold back a tide of APIs, bots, and AI agents that already outnumber us. IAM 3.0 isn’t a patch. It’s a rewrite.”
Ping Identity wants to help establish foundations of agent trust
Ping Identity also has a new AI product for managing trust in agentic AI. A release says the new AI framework is “designed to close the trust gap created by the rise of AI agents, along with AI-powered assistants that boost administrator productivity.”
The goal, says the company, is to help enterprises reduce risk, maintain oversight and establish the foundations of agent trust, including “verifying identity, managing access and governing agent lifecycles.”
Peter Barker, Chief Product Officer at Ping Identity, says we can no longer implicitly trust what we see, hear, or receive digitally. “As AI becomes more embedded in the enterprise, humans and AI agents must work together seamlessly – with security and verification at the forefront.”
By securing AI agents, simplifying access control, and streamlining workflows, Ping Identity is “establishing identity as the foundation of enterprise trust in the AI era, ensuring innovation can scale without sacrificing security or experience.”
‘You need to know who they are and what they are allowed to do’
The Agentic AI revolution also gets coverage in a panel discussion featuring Itamar Apelblat of Token Security with guest speakers Geoff Cairns of Forrester and Jonathan Jaffe, CISO of Lemonade. The trio discusses what happens when machines act for themselves, and “the urgent need for an identity-first security approach for ensuring a strong agentic AI security posture without impacting innovation and agility.”
“When AI agents are granted credentials, call upon APIs, or operate across cloud environments, you need to know who they are and what they are allowed to do, and be aware if something goes wrong.”
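The identity-first check the panel describes – resolve every agent call to a known identity, confirm what it is allowed to do, and log the decision so you notice when something goes wrong – can be sketched in a few lines. The registry shape and names below are illustrative assumptions, not Token Security’s product:

```python
# Illustrative sketch of an identity-first gate for agent API calls.

AGENT_REGISTRY = {
    "agent-claims-triage": {"owner": "claims-team",
                            "allowed": {"claims:read", "claims:annotate"}},
}

audit_log = []

def authorize_agent_call(agent_id, action):
    """Know who the agent is and what it may do; log every decision."""
    agent = AGENT_REGISTRY.get(agent_id)
    decision = agent is not None and action in agent["allowed"]
    # the audit trail is what lets you "be aware if something goes wrong"
    audit_log.append({"agent": agent_id, "action": action,
                      "allowed": decision})
    return decision

print(authorize_agent_call("agent-claims-triage", "claims:read"))    # in scope
print(authorize_agent_call("agent-claims-triage", "payments:issue")) # refused
print(authorize_agent_call("agent-unknown", "claims:read"))          # unknown agent
```

Unregistered agents fail closed, and denied attempts land in the same log as approvals, giving defenders one place to watch for an agent drifting outside its mandate.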
Article Topics
AI agents | biometric authentication | digital trust | Fabrix | funding | identity access management (IAM) | Identity Fusion | Ping Identity | Scalekit | SecureAuth | Token