FB pixel

Startup Cyata nets $8.5M in funding to fight growing threat of AI agents gone bad

Automated bots need oversight to prevent unauthorized cyber-rampages
Categories Access Control  |  Biometrics News  |  Trade Notes
Startup Cyata nets $8.5M in funding to fight growing threat of AI agents gone bad
 

Robot agents gone rogue sounds like the premise of a spy film, but it has become a legitimate security problem, as task-driven agents, copilots and chatbots increasingly operate without oversight across enterprise environments. Although AI agents are being given responsibility for executing code, querying sensitive biometric databases, initiating transactions, and triggering automated workflows, they fall outside traditional identity frameworks, creating a critical security gap.

Cyata is getting ahead of the problem, focusing its mission on managing the influx of bots that behave like digital employees. The Tel Aviv-based cybersecurity startup, founded by alumni of Unit 8200, Cellebrite and Check Point, has launched from stealth with $8.5 million in seed funding led by Israeli firm TLV Partners, with backing from Ron Serber and Yossi Carmil – per a report in Silicon Angle, two former chief executives of digital forensics company Cellebrite, which notoriously once hacked Apple’s iOS operating system.

“AI agents represent the biggest leap in enterprise technology since the cloud,” says Cyata CEO and co-founder Shahar Tal, who calls agentic AI “a self-scaling, sleepless workforce that codes, analyzes and executes in seconds.”

“They act autonomously and at scale, yet no one is watching what they do. Cyata changes that.”

Cyata’s platform features a so-called “control plane for agentic identities,” which automatically  scans the customer’s cloud and software-as-a-service environments and identity management systems to discover AI agents and their permissions, map them to human owners, and continuously assess risk. A forensic layer tracks agent activity, including a feature requiring agents to justify their reasoning in real time. Unauthorized AI agents are locked down to minimize damage.

Tal says unheeded agents can cause havoc, rewriting essential application code, sharing confidential data and even moving money between accounts, and leaving no audit trails behind. Their dynamic nature means they can “spawn instantly, fan out across multiple workflows and carry out autonomous actions without anyone watching.” Hallucinations can push them off course, and they could also be susceptible to hijacking by bad actors.

Tal says Cyata focuses on the actors themselves, not the LLMs controlling them.

“Agents, not models, are the ones making the decisions and triggering risk. We give security teams identity-grade controls specifically for AI agents, so they can unlock their power without losing control.”

TLV Partners’ Brian Sack anticipates “massive demand for a platform such as Cyata’s in the coming years,” and says the firm is “uniquely positioned to define and lead this critical new category before organizations face potentially catastrophic breaches.”

AI agents could ‘break the fraud stack as we know it’

The help couldn’t arrive sooner. A new report from Transmit Security, Blinded by the Agent, says consumer AI agents are defeating traditional fraud detection – and enterprises are unprepared.

“If we don’t act now, the rise of agentic AI will break the fraud stack as we know it,” says Mickey Boodaei, CEO of Transmit Security. “Fraud controls today were built for a world where humans click the buttons. But now, AI is clicking them for us – and the systems can’t tell the difference between AI operated by legitimate users and AI operated by fraudsters.”

The white paper includes some striking stats. Over 60 percent of online traffic to retailers is already bots, not humans, it says – and with AI agents acting on behalf of consumers, that number is expected to surpass 90 percent in the near future. Fraud teams will face 2-3 times more operational workload over the next 12-18 months, just to maintain current protection.

Problematically, “behavioral biometrics fail when there are no human signals – a core flaw in an agent-driven world.”

Financial institutions and online merchants, says the report, are woefully under-prepared for the change.

“This is not just about fraud – it’s about trust,” says David Mahdi, CIO of Transmit Security. “When the AI agent becomes your user’s digital proxy, your systems must adapt. Identity, fraud, and authentication platforms need to be re-architected to recognize and verify intent – not just inputs.”

Researchers warn to avoid getting caught in Identity Mesh

A piece in Security Boulevard underlines the point with a look at a few ways attackers can exploit agentic AI systems to expose sensitive data and conduct malicious activity, “including the execution of arbitrary code and the initiation of potentially harmful actions across disparate applications, systems and services.”

One vulnerability, the “IdentityMesh,” is a flaw in the way agentic AI systems manage identities and context – an architectural weakness that provides an attacker-friendly path for exploiting systems connected via Model Context Protocol (MCP). AI agents “merge identities from multiple MCP-connected systems into a single ‘functional entity,’” enabling threat actors to initiate operations from one MCP-connected system within a group of MCPs.

“IdentityMesh exploits a fundamental weakness in agentic AI,” says Bar Lanyado, lead security researcher at Lasso. “When an AI agent operates across multiple platforms using a unified authentication context, it creates an unintended mesh of identities that collapses security boundaries. It’s the single source of privileges problem.”

This “enables attackers to inject malicious content into external systems that AI agents can access, then leverage the agent’s access across systems to exfiltrate data, phish users for credentials, or distribute malware across environments.”

A second vulnerability noted by API security platform provider Pynt found that “security risks multiply exponentially as organizations deploy multiple MCPs. While a single MCP presents a 9 percent chance of being exploitable, systems with three MCPs face a 52 percent chance of creating high-risk configurations. Organizations using ten MCPs face a 92 percent probability of exploitation.”

Mitigation steps include beefing up MCP security by requiring user approval for all MCP server calls; disabling unused servers and tools, and containerizing MCP servers with system access.

For IdentityMesh prevention, implement context isolation between AI agent operations; deploy runtime monitoring for cross-system behavior; use memory validation mechanisms and implement strict access controls for agent identities.

Bored LLM phones in CAPTCHA to pass as human

If you think CAPTCHA’s “I am not a robot” process is adequate proof of personhood, ChatGPT has bad news for you. Cyber Press reports that “an advanced AI agent has been observed casually clicking through CAPTCHA verification systems designed specifically to exclude non-human users.”

Researchers were surprised to see Chat treat the tests in a very human fashion, which allowed it to fool the system’s behavioral analysis. “Rather than triggering the typical automated detection protocols that flag bot activity, the agent appeared to process the verification request with the same casual indifference displayed by human users completing routine online tasks.”

The agent “demonstrated a sophisticated understanding of expected human response patterns, including subtle delays, natural cursor movements, and appropriate interaction timing that effectively masked its artificial nature.”

In effect, the machine has already learned the old lesson of corporate drudgery: do your job too well, and you might start to look suspicious.

Related Posts

Article Topics

 |   |   |   |   |   |   | 

Latest Biometrics News

 

Podcast: Dr. Sean Kelly says biometrics offer security, efficiency for healthcare

A new survey from Imprivata shows a shocking gap between how healthcare professionals see passwordless authentication, and how healthcare facilities…

 

UNDP showcases how blockchain complements DPI and digital transformation efforts

From Ghana to Georgia, the United Nations Development Programme (UNDP) has implemented blockchain technology into dozens of public systems over…

 

Research into protections against speech analysis privacy threats maturing rapidly

Our voice reveals much more about us than we may realize: The biometric information of our speech contains information about…

 

Scale of AI fraud makes legacy identity verification inadequate

Sometimes, you just have to tell yourself, “I’m good enough.” Then again, if you’re a digital identity security system, you’d…

 

Toss gets lift from biometric retail payments, plans 2026 US IPO

Retail payments with face biometrics are growing in South Korea, and could help lift one of the country’s leading providers…

 

PNG SevisWallet will transform how government issues personal credentials

Papua New Guinea (PNG) has officially made available the SevisWallet digital identity wallet for download, allowing Papuans to use the…

Comments

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Biometric Market Analysis and Buyer's Guides

Most Viewed This Week

Featured Company

Biometrics Insight, Opinion

Digital ID In-Depth

Biometrics White Papers

Biometrics Events