
Cybersecurity agencies issue new guidance for adopting agentic AI

Layered defense, robust governance, authentication among necessities for AI agents

New guidance developed by a group of national cybersecurity agencies aims to lay out “key cyber security challenges and risks associated with the introduction of agentic AI into IT environments, as well as best practices for securing agentic AI systems.”

Authoring agencies include the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the United States Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Canadian Centre for Cyber Security (Cyber Centre), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom National Cyber Security Centre (NCSC-UK).

In their report on “careful adoption of agentic AI services,” which focuses on large language model (LLM)-based agentic AI systems, the agencies together “strongly recommend aligning agentic AI risks and mitigation strategies with your organisation’s existing security model and risk posture.” Adopt agentic AI with a security-first mindset, and do not grant it broad or unrestricted access to sensitive data or critical systems; ideally, agents should only be used for low-risk and non-sensitive tasks.

Guidance identifies best practices for design, development, deployment

The guidance lists the security risks that agentic AI poses, and lays out best practices for securing agentic systems.

“Privilege escalation, emergent behaviours, structural dependencies and accountability gaps can interact in unpredictable ways,” it says. “Agentic AI developers, vendors and operators should implement a layered defence and strict access controls to reduce the likelihood of compromise.” Recommendations cover the practical design, development and deployment of AI agents, ranging from oversight mechanisms to layered defenses to comprehensive testing and evaluation.
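To make the “layered defence and strict access controls” idea concrete, here is a minimal sketch of how an agent’s tool calls might pass through several independent checks before execution. All names (tools, patterns, limits) are illustrative assumptions, not taken from the agencies’ guidance:

```python
# Minimal sketch: layered defence for agent tool calls.
# Each layer is independent; a call must pass all of them.

ALLOWED_TOOLS = {"search_docs", "summarize"}   # layer 1: tool allowlist
BLOCKED_PATTERNS = ("DROP TABLE", "rm -rf")    # layer 2: argument screening
MAX_CALLS_PER_SESSION = 20                     # layer 3: rate limiting

class ToolCallRejected(Exception):
    """Raised when any defensive layer blocks an agent's tool call."""

def vet_tool_call(tool: str, argument: str, calls_so_far: int) -> None:
    """Raise ToolCallRejected unless every defensive layer passes."""
    if tool not in ALLOWED_TOOLS:
        raise ToolCallRejected(f"tool '{tool}' not on allowlist")
    if any(p in argument for p in BLOCKED_PATTERNS):
        raise ToolCallRejected("argument matched a blocked pattern")
    if calls_so_far >= MAX_CALLS_PER_SESSION:
        raise ToolCallRejected("session call budget exhausted")
```

The point of layering is that a compromise of any single check (say, a prompt injection that smuggles past argument screening) still leaves the allowlist and rate limit standing.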

As observers of the biometrics sector will know, governance is a key concern for deployment of AI agents. “Autonomous actions by agentic AI systems introduce new risks, requiring updated governance policies and continuous runtime authentication with centralized policy decision points for each action.”
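The “continuous runtime authentication with centralized policy decision points for each action” pattern can be sketched as a single authorization gate that every agent action must clear, with no cached grants. The agent IDs, policy table and token scheme below are hypothetical stand-ins for a real credential system:

```python
# Illustrative policy decision point (PDP): every agent action is
# re-authenticated and re-authorized centrally before execution.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    agent_id: str
    operation: str   # e.g. "read", "write"
    resource: str

# Hypothetical policy table: the operations each agent may perform.
POLICY = {
    "support-agent": {("read", "kb/articles")},
}

# Stand-in for a real credential store (e.g. short-lived tokens).
VALID_TOKENS = {"support-agent": "tok-123"}

def authorize(action: Action, token: str) -> bool:
    """Check identity and policy on every action; nothing is grandfathered in."""
    if VALID_TOKENS.get(action.agent_id) != token:
        return False  # authentication failed
    allowed = POLICY.get(action.agent_id, set())
    return (action.operation, action.resource) in allowed
```

Centralizing the decision means policy changes (revoking a token, narrowing an agent’s scope) take effect on the very next action, rather than waiting for a session to expire.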

“Ongoing strict privilege management of AI agents is key to long-term security. Lapses here can change the impact of a buggy agent from minor to catastrophic.”

Future-proofing requires collaboration, cool heads

Agentic AI continues to develop, and responsible use is a long game. “Organizations must anticipate and address the new risks these systems introduce,” the report says. “While industry and academia are developing practices to secure agentic AI, the field is still evolving and requires continued research and the practical implementation of agent security to address emerging challenges.”

The authoring agencies recommend security practitioners and researchers expand threat intelligence through collaboration, develop robust and agent-specific evaluations, and “use system-theoretic approaches to analyse agentic AI systems and identify appropriate security measures.”

And, overall, organizations should proceed with caution. “Deploy agentic AI incrementally, beginning with clearly defined low‑risk tasks and continuously assessing it against evolving threat models. Strong governance, explicit accountability, rigorous monitoring and human oversight are not optional safeguards but essential prerequisites.”

“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritizing resilience, reversibility and risk containment over efficiency gains.”

In other words, say some of the world’s foremost cybersecurity agencies, take a breather before releasing a horde of autonomous agents into your enterprise IT systems. Agentic AI has been evangelized as the technology of the moment, but the risks are too high to rush in.
