Cybersecurity agencies issue new guidance for adopting agentic AI

New guidance developed by a group of national cybersecurity agencies aims to lay out “key cyber security challenges and risks associated with the introduction of agentic AI into IT environments, as well as best practices for securing agentic AI systems.”
Authoring agencies include the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the United States Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Canadian Centre for Cyber Security (Cyber Centre), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom National Cyber Security Centre (NCSC-UK).
In their report on “careful adoption of agentic AI services,” which focuses on large language model (LLM)-based agentic AI systems, the agencies together “strongly recommend aligning agentic AI risks and mitigation strategies with your organisation’s existing security model and risk posture.” They advise adopting agentic AI with a security-first mindset and not granting agents broad or unrestricted access to sensitive data or critical systems; ideally, agents should be used only for low-risk, non-sensitive tasks.
Guidance identifies best practices for design, development, deployment
The guidance lists the security risks agentic AI poses and lays out best practices for securing agentic systems.
“Privilege escalation, emergent behaviours, structural dependencies and accountability gaps can interact in unpredictable ways,” it says. “Agentic AI developers, vendors and operators should implement a layered defence and strict access controls to reduce the likelihood of compromise.” Recommendations cover the practical design, development and deployment of AI agents, ranging from oversight mechanisms to layered defenses to comprehensive testing and evaluation.
As observers of the biometrics sector will know, governance is a key concern in the deployment of AI agents. “Autonomous actions by agentic AI systems introduce new risks,” the report warns, “requiring updated governance policies and continuous runtime authentication with centralized policy decision points for each action.”
“Ongoing strict privilege management of AI agents is key to long-term security. Lapses here can change the impact of a buggy agent from minor to catastrophic.”
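To make those two recommendations concrete (a centralized policy decision point that authorizes every agent action at runtime, and strict, deny-by-default privileges per agent), here is a minimal Python sketch. Every name in it (PolicyDecisionPoint, AgentIdentity, the tool and target strings) is an illustrative assumption, not code or terminology specified in the report.

```python
from dataclasses import dataclass

# Hypothetical agent identity with an explicit, deny-by-default set of
# permitted tools -- one way to read the guidance's call for "ongoing
# strict privilege management." All names here are illustrative.
@dataclass
class AgentIdentity:
    name: str
    allowed_tools: frozenset[str]  # least privilege: nothing unless granted

@dataclass
class ActionRequest:
    agent: AgentIdentity
    tool: str    # e.g. "read_ticket", "draft_reply"
    target: str  # resource the action touches

class PolicyDecisionPoint:
    """Centralized policy decision point: every agent action is authorized
    here at runtime, so a revoked privilege takes effect immediately and
    each decision leaves an audit trail."""

    def __init__(self, sensitive_targets: set[str]):
        self.sensitive_targets = sensitive_targets
        self.audit_log: list[tuple[str, str, str, bool]] = []

    def authorize(self, req: ActionRequest) -> bool:
        allowed = (
            req.tool in req.agent.allowed_tools           # explicit grant only
            and req.target not in self.sensitive_targets  # no critical systems
        )
        self.audit_log.append((req.agent.name, req.tool, req.target, allowed))
        return allowed

# Usage: a helpdesk agent scoped to low-risk, non-sensitive tasks.
pdp = PolicyDecisionPoint(sensitive_targets={"payroll_db", "prod_secrets"})
agent = AgentIdentity("helpdesk-agent", frozenset({"read_ticket", "draft_reply"}))

print(pdp.authorize(ActionRequest(agent, "read_ticket", "ticket-4121")))    # True
print(pdp.authorize(ActionRequest(agent, "read_ticket", "payroll_db")))     # False: sensitive target
print(pdp.authorize(ActionRequest(agent, "delete_record", "ticket-4121")))  # False: tool never granted
```

The design point is that authorization happens at the decision point on every action rather than once at deployment, so a privilege lapse can be caught and revoked immediately, and the audit log preserves the accountability trail the agencies say such systems otherwise lack.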
Future-proofing requires collaboration, cool heads
Agentic AI continues to develop, and responsible use is a long game. “Organizations must anticipate and address the new risks these systems introduce,” the report says. “While industry and academia are developing practices to secure agentic AI, the field is still evolving and requires continued research and the practical implementation of agent security to address emerging challenges.”
The authoring agencies recommend security practitioners and researchers expand threat intelligence through collaboration, develop robust and agent-specific evaluations, and “use system-theoretic approaches to analyse agentic AI systems and identify appropriate security measures.”
And, overall, organizations should proceed with caution. “Deploy agentic AI incrementally, beginning with clearly defined low‑risk tasks and continuously assessing it against evolving threat models. Strong governance, explicit accountability, rigorous monitoring and human oversight are not optional safeguards but essential prerequisites.”
“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritizing resilience, reversibility and risk containment over efficiency gains.”
In other words, say some of the world’s foremost cybersecurity agencies: take a breather before releasing a horde of autonomous agents into your enterprise IT systems. Agentic AI has been evangelized as the technology of the moment, but the risk is too high to rush in.