
iProov warns of ‘accountability vacuum’ with rise of autonomous AI agents

As autonomous AI agents proliferate, the sheer number of them acting at machine speed could wreak havoc. Long-time identity company iProov is sounding a stark warning about the risks, arguing we could be sleepwalking into an “accountability vacuum.”

This is a black hole where high‑impact decisions are made without any verifiable human authorization. The company has shone a light on the issue at RSA Conference 2026, where iProov’s Johan Sellström will demonstrate new methods for cryptographically binding AI‑agent actions to confirm human intent.

iProov has titled its research “The Great Trust Recession.” It sees a parallel to the deepfake-driven collapse in public trust, with the threat coming not just from external manipulation but from organizations’ own automated systems acting without meaningful human oversight.

Andrew Bud, iProov’s founder and CEO, argues that the identity infrastructure operating today was never designed for autonomous decision‑making. Even authentication standards such as FIDO2, one‑time passcodes and push notifications assume a human is present.

These systems verify identity and permission, but they cannot verify intent. As Bud puts it: “The entire trust chain begins and ends with a real person,” but AI agents break that assumption.

The U.S. National Cyber Strategy calls for rapid adoption of agentic AI to modernize national‑scale systems, while regulators in Europe and the U.S. are converging on the principle that human oversight must be “meaningful, not ceremonial.” NIST’s recent concept paper on agent identity identifies human‑in‑the‑loop binding as a core requirement.

iProov argues that effective oversight requires three elements: the right human with the authority to make the decision, sufficient context to make a real choice, and an attributable, timestamped record tied to a verified identity. Without these, the company warns, enterprises risk legal, financial and ethical exposure.

At RSA 2026, Sellström will demonstrate how AI‑agent actions can be cryptographically tied to verified human approval. iProov says this kind of binding is essential if organizations want to scale agentic AI safely and avoid the internal trust collapse it predicts.
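The three elements above can be illustrated with a simplified sketch: a verified human signs a timestamped approval record bound to the hash of a specific agent action, and the agent verifies that binding before acting. This is a hypothetical, minimal example using an HMAC shared secret as a stand-in for a real signing key and identity provider; it is not iProov’s actual method, and the record fields and flow are illustrative assumptions.

```python
import hashlib
import hmac
import json
import time

# Illustrative only: a shared secret stands in for a real key held by an
# identity provider; production systems would use asymmetric signatures.
APPROVER_KEY = b"demo-secret-held-by-identity-provider"


def sign_approval(approver_id: str, action: dict) -> dict:
    """Create an attributable, timestamped approval record for one agent action."""
    record = {
        "approver": approver_id,  # the verified human with authority
        "action_hash": hashlib.sha256(
            json.dumps(action, sort_keys=True).encode()
        ).hexdigest(),  # binds approval to exactly this action
        "timestamp": time.time(),  # when the approval was given
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_approval(record: dict, action: dict) -> bool:
    """The agent checks the binding before acting: valid signature, matching action."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    expected = hmac.new(
        APPROVER_KEY, json.dumps(unsigned, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    action_hash = hashlib.sha256(json.dumps(action, sort_keys=True).encode()).hexdigest()
    return (
        hmac.compare_digest(record["signature"], expected)
        and record["action_hash"] == action_hash
    )


action = {"type": "wire_transfer", "amount": 50000}
record = sign_approval("alice@example.com", action)
print(verify_approval(record, action))  # the approved action passes
# A tampered action no longer matches the signed hash and is rejected:
print(verify_approval(record, {"type": "wire_transfer", "amount": 99999}))
```

The point of the sketch is the binding itself: the signature covers both who approved and what was approved, so an agent action that drifts from the authorized one fails verification and leaves an attributable record behind.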

Agentic AI a big theme for RSA 2026

Agentic AI was a big theme of the RSA Conference 2026, held in San Francisco, with Cisco president and chief product officer Jeetu Patel giving a keynote address on how AI agents are challenging the foundations on which security architecture was built.

The speed and scale of AI erode the fabric of traditional security models, he said. Cisco’s 2026 Data Privacy Benchmark Study found that 90 percent of organizations have added AI to their privacy programs but that only 12 percent say their AI governance is mature and proactive. Patel believes the rise of AI agents will require a new model for establishing trust, granting access, and maintaining ownership.

Related coverage: Swissbit, RSA, IBM, Auth0, Yubico and Delinea show off identity security tech at RSAC 2026

Related coverage: AI agent identity and next‑gen enterprise authentication prominent at RSAC 2026 

Meta incident shows why humans should be kept in the loop

Mark Zuckerberg’s Meta, meanwhile, is spending enormous amounts on developing AI, setting aside billions to hire top talent and invest in AI infrastructure. Yet it succumbed to the risks of AI when an AI agent, acting without permission, caused a data leak that exposed sensitive details for hours, The Information reports (via AI Magazine).

According to The Information’s reporting, a software engineer at Meta posed a technical query on an internal forum. Another employee used an in-house AI agent to analyze the problem. But instead of just returning its analysis to the employee, the AI agent posted its response in the forum without being told to do so.

The software engineer who’d posted the original question then acted on the AI agent’s guidance, and that is what exposed the sensitive data. The incident illuminates what phrases like “human oversight” and “trust” really mean in practice.

There was no human oversight when the AI agent posted its answer, and a second layer of oversight was missing when the software engineer implemented its advice. Both placed full trust in the guidance, when full trust in LLM output is not advisable.

The incident highlights the need to keep humans in the loop, an expert told AI Magazine. Artificial assistants must be kept on track, with guard rails in place, to ensure they behave as intended and their actions and outputs are reviewed, they suggested.
