Scramble is on to counter agentic AI gold rush with security, transparency

How safe are the networks of AI agents transforming commerce, work, and identity and authentication systems? Hard to say, according to new data from the University of Cambridge – because the people creating them aren’t saying enough.
Investigating the emergent AI ecosystem, researchers found “basic safety disclosure” to be “dangerously lagging,” according to a blog post. AI developers, it turns out, “share plenty of data on what these agents can do, while withholding evidence of the safety practices needed to assess any risks posed by AI.”
Of the 30 AI agents listed in the AI Agent Index (a project also involving teams from MIT, Stanford, and the Hebrew University of Jerusalem), only four publish agent-specific formal safety and evaluation documents. Twenty-five of the 30 do not disclose internal safety results, while 23 provide no data from third-party testing.
“Many developers tick the AI safety box by focusing on the large language model underneath, while providing little or no disclosure about the safety of the agents built on top,” says Leon Staufer, lead author of Cambridge’s update to the Index. “This transparency asymmetry suggests a weaker form of safety washing.”
The people putting AI agents everywhere, in short, aren’t saying much about safety. That makes it worth taking the right precautions.
Agent Checkpoint anchors KYA suite from Vouched
Vouched has launched a verification product to secure agentic commerce. A release from the Seattle-based firm says Agent Checkpoint offers a full suite of agent identification and permissioning tools for website operators.
“Agents are breaking all the prior rules of cybersecurity,” says Peter Horadan, CEO of Vouched. The company says between 0.5 percent and 16 percent of all incoming traffic for its customer base now comes from AI agents. “Every website is realizing they now have AI agents coming through their login button, using the username and password of humans. These sites have no way to tell agents apart from humans, or even to know if a given agent is trustworthy.”
“Agent Checkpoint gives businesses the clarity and confidence to know exactly who, or what, they’re dealing with,” he says.
Anchoring Vouched’s Know Your Agent (KYA) suite, Agent Checkpoint offers secure authentication using OAuth, precise delegation, explicit authorization for legally binding commitments, instant revocation capabilities, and complete audit trails for compliance. The goal is secure, authorized agentic transactions without the scourge of fraud.
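Vouched has not published implementation details, but the pattern the release describes – scoped delegation, explicit authorization for binding commitments, instant revocation, and audit trails – can be sketched in a few lines. Everything below (the `AgentToken` claim names, the `authorize` helper) is a hypothetical illustration of that pattern, not Vouched’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentToken:
    """Claims an OAuth-style agent credential might carry (hypothetical schema)."""
    agent_id: str          # which agent is calling
    delegator: str         # the human or business the agent acts for
    scopes: frozenset      # actions explicitly delegated, e.g. "browse", "purchase"
    expires_at: datetime
    revoked: bool = False  # flipped for instant revocation

audit_log = []  # append-only record of every decision, for compliance review

def authorize(token: AgentToken, action: str) -> bool:
    """Permit an action only if the token is live, unrevoked, and the
    action was explicitly delegated; log the decision either way."""
    live = not token.revoked and datetime.now(timezone.utc) < token.expires_at
    allowed = live and action in token.scopes
    audit_log.append((token.agent_id, token.delegator, action, allowed))
    return allowed
```

Under this model, a legally binding step like a purchase simply isn’t in the default scope set, so it fails closed until the delegating human grants it explicitly.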
“We are moving from a world where humans are the primary consumer to one in which AI agents transact at scale,” says Rosalyn Curato, chief innovation officer and GM of agentic security at Vouched. “Agent Checkpoint sits at the center of this shift by establishing the trust required for organizations to lean into agentic commerce with confidence. Enterprises shouldn’t have to choose between innovation and risk, and now they don’t have to.”
Agentic colleagues need different guidelines: SentinelOne
Cybersecurity firm SentinelOne has released new identity offerings to address the use of agentic AI and non-human identities in the workplace. The firm says AI agents that execute autonomous work create new vulnerabilities and risk surfaces across browsers, endpoints, AI tools and automated workloads.
Jeff Reed, CTO of SentinelOne, says “identity risk no longer begins and ends at authentication, and attackers are increasingly operating within authorized workflows.” While human identity requires continuous user authentication, non-human identity requires continuous “validation of intent through behavior.” The new Singularity Identity platform architecture aims to deliver end-to-end visibility and response across both human and non-human activity. Reed says it transforms identity from a static gate into a “dynamic engine of behavioral assurance.”
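SentinelOne hasn’t detailed how “validation of intent through behavior” works internally, but the core idea – checking each action of a non-human identity against its expected profile rather than trusting a one-time login – can be sketched. The profile shape, verdicts, and thresholds below are illustrative assumptions, not SentinelOne’s design:

```python
import time
from collections import deque

class BehaviorMonitor:
    """Continuously validate a non-human identity against an expected
    behavioral profile, instead of stopping at authentication."""

    def __init__(self, expected_actions, max_per_minute, clock=time.monotonic):
        self.expected = set(expected_actions)
        self.max_per_minute = max_per_minute
        self.recent = deque()  # timestamps of recent actions
        self.clock = clock     # injectable clock, to keep the sketch testable

    def validate(self, action):
        now = self.clock()
        while self.recent and now - self.recent[0] > 60:
            self.recent.popleft()      # drop events outside the 60s window
        self.recent.append(now)
        if action not in self.expected:
            return "block"             # action outside the agent's known intent
        if len(self.recent) > self.max_per_minute:
            return "throttle"          # expected action, abnormal volume
        return "allow"
```

An agent provisioned to read support tickets that suddenly issues a delete request gets blocked even though its credentials are perfectly valid – which is the point Reed is making about attackers operating inside authorized workflows.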
NIST AI Agent Standards Initiative sees US jockeying for influence
The AI Agent Index isn’t the only project looking to provide a foundational understanding for the deployment of agentic AI. The National Institute of Standards and Technology (NIST)’s Center for AI Standards and Innovation (CAISI) recently launched the AI Agent Standards Initiative.
According to an opinion piece in ETEdge Insights, the initiative rests on three pillars: “facilitating industry-led development of agent standards and strengthening U.S. leadership in international standards bodies, supporting community-led open-source protocol development for agents, and advancing research on AI agent security and identity to enable trusted adoption across sectors.”
The coverage frames the accelerated work on AI agents as a political push as well as a technical one. By “positioning CAISI and NIST at the center of AI agent standardization and coordinating with federal partners like NIST’s Information Technology Laboratory,” the piece argues, “the U.S. is signaling its intent to lead international standards conversations rather than react to them. The goal is not just domestic trust. It is global influence.”
The claim is underscored by the observation that India, too, has recently stated its desire to provide global leadership on AI.
Use AI agents to fight AI agents?
The agentic AI discourse has yet to reach peak “Use AI to Fight AI” – but it’s beginning. A blog post from Dev Kumar, CEO of MojoAuth, makes the case for AI agents as a critical tool for authentication. “By operating as real-time risk engines within OAuth and identity provider architectures,” Kumar says, “they enhance fraud detection, reduce friction, and support continuous session integrity.
“Organizations that integrate AI agents thoughtfully – with strong architectural discipline and governance oversight – will be better positioned to defend against increasingly automated threats.”
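Kumar’s “real-time risk engine” idea maps naturally onto a scoring hook in the token-issuance path: combine session signals into a score, and step up authentication above a threshold rather than adding friction for everyone. The signal names, weights, and thresholds below are invented for illustration and are not MojoAuth’s implementation:

```python
# Illustrative weights for common fraud signals (assumed values, not MojoAuth's).
SIGNAL_WEIGHTS = {
    "new_device": 0.3,
    "ip_reputation_bad": 0.4,
    "impossible_travel": 0.5,
    "abnormal_request_rate": 0.3,
}
STEP_UP_THRESHOLD = 0.5
DENY_THRESHOLD = 1.0

def assess(session_signals) -> str:
    """Return the action an OAuth authorization server might take
    before issuing or refreshing a token for this session."""
    score = sum(SIGNAL_WEIGHTS[s] for s in session_signals if s in SIGNAL_WEIGHTS)
    if score >= DENY_THRESHOLD:
        return "deny"
    if score >= STEP_UP_THRESHOLD:
        return "step_up"  # require MFA or re-consent before continuing
    return "allow"
```

The low-friction path – no suspicious signals, token issued silently – is what Kumar means by reducing friction while supporting continuous session integrity: the same check can re-run on every token refresh, not just at login.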
Article Topics
AI agents | cybersecurity | digital identity | identity access management (IAM) | identity security | MojoAuth | NIST | SentinelOne | Vouched