NIST concept paper explores identity and authorization controls for AI agents

A draft concept paper released by the National Institute of Standards and Technology (NIST) asks industry and government stakeholders how organizations should identify, authenticate and control software and artificial intelligence agents that can access enterprise systems and take actions with limited human supervision.

Published by NIST’s National Cybersecurity Center of Excellence (NCCoE), the paper, “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization,” outlines a proposed project aimed at adapting modern identity and access management frameworks to a new class of digital actors that increasingly operate across enterprise networks.

The paper was written by Ryan Galluzzo, who leads NIST’s digital identity program, Bill Fisher, Harold Booth and Joshua Roberts.

Released as an initial public draft, the paper reflects growing recognition that agentic AI systems capable of gathering information, interacting with tools and executing tasks on behalf of users may require identity governance comparable to that applied to human users and traditional software workloads.

The concern is that the rapid emergence of these “agentic” systems – software capable of making decisions and executing tasks with limited human supervision – is outpacing the security and governance models that have traditionally controlled automated processes.

For more than a decade, organizations have relied on code-based automation to manage cloud workloads, APIs and enterprise workflows. But AI agents represent a different category of software actor.

Unlike conventional automation scripts, these systems can dynamically gather information from multiple sources, reason over that data and take actions that may affect multiple downstream systems. As their capabilities expand, so does the potential impact of mistakes, misuse or compromise.

The NIST concept paper argues that existing identity frameworks must evolve to address this shift. Systems that can autonomously access tools, query databases and execute operations on behalf of users require clear mechanisms for identification, authentication and authorization.

Without those controls, AI agents could effectively become privileged actors operating across enterprise networks with unclear accountability.

The NCCoE proposal centers on a straightforward but consequential premise: AI agents should be treated as identifiable entities within enterprise identity systems rather than as anonymous automation running under shared credentials.

A future NCCoE demonstration project would explore how existing identity standards and best practices can be applied to these systems so organizations can securely deploy agentic AI technologies while managing risk.

Among the questions the agency is asking stakeholders to address are how AI agents should be identified within enterprise architectures, what metadata should define their identities and whether those identities should be persistent or dynamically tied to specific tasks.

The concept paper also raises technical issues related to authentication, including how credentials for AI agents should be issued, updated and revoked. As with human users, compromised credentials or poorly managed authentication mechanisms could allow malicious actors to hijack agent capabilities.

Authorization presents another set of challenges. AI agents may need access to multiple data sources and enterprise tools to complete tasks, yet their behavior may evolve as they interact with systems and gather new information.

That dynamic nature complicates the principle of least privilege, a cornerstone of cybersecurity that limits access rights to only what is necessary for a specific task.

The paper asks whether authorization policies for AI agents should be able to adapt in real time as an agent’s operational context changes.
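One way to picture task-scoped, adaptive authorization is a check that intersects an agent's standing permissions with what its current task actually justifies. The sketch below is illustrative only; the agent scopes, task names and resource labels are hypothetical, not drawn from the NIST paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskContext:
    """The task an agent is currently executing and the resources it justifies."""
    task_id: str
    allowed_resources: frozenset[str]

def is_authorized(agent_scopes: set[str], context: TaskContext, resource: str) -> bool:
    """Permit access only if the resource is both within the agent's standing
    scopes AND justified by its current task (least privilege per task)."""
    return resource in agent_scopes and resource in context.allowed_resources

# A scheduling agent holds broad standing scopes...
scopes = {"calendar:read", "calendar:write", "hr:read"}
# ...but the active task narrows what it may actually touch.
ctx = TaskContext("draft-meeting-invite",
                  frozenset({"calendar:read", "calendar:write"}))

print(is_authorized(scopes, ctx, "calendar:write"))  # True
print(is_authorized(scopes, ctx, "hr:read"))         # False: not justified by the task
```

Because the `TaskContext` changes as the agent moves between tasks, the same agent identity can carry different effective permissions at different moments, which is the dynamic behavior the paper asks stakeholders to weigh in on.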

The paper also highlights several emerging security risks tied to the deployment of AI agents. One of them is prompt injection, a technique in which adversaries manipulate the inputs provided to an AI system to influence its behavior.

If an AI agent can access enterprise resources or trigger operational actions, a successful prompt injection attack could cause the system to retrieve sensitive data or execute unintended commands.

Another concern is accountability. Autonomous agents may carry out actions on behalf of human users or organizations, raising questions about how responsibility should be assigned if those actions cause harm.

To address this, the NCCoE project would examine mechanisms for logging and auditing agent activity. Such systems could ensure that actions taken by an AI agent can be traced back to the nonhuman identity that performed them and ultimately to the human authority responsible for delegating those permissions.
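An audit trail of that kind would need each record to bind an action to both the nonhuman identity and the human delegator. A minimal sketch, with entirely hypothetical identifiers and field names:

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, delegator: str, action: str, resource: str) -> str:
    """Emit one JSON audit log entry binding an agent action to the nonhuman
    identity that performed it and the human authority that delegated it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,        # the nonhuman identity
        "delegated_by": delegator,   # the accountable human principal
        "action": action,
        "resource": resource,
    }
    return json.dumps(entry)

record = audit_record("agent:sched-42", "user:alice@example.com",
                      "calendar.event.create", "calendars/team")
print(record)
```

With both identities present in every record, an auditor can reconstruct the delegation chain after the fact instead of inferring it from shared credentials.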

Rather than developing entirely new frameworks, the NIST initiative focuses on adapting existing identity and access management standards to the emerging agent ecosystem.

The concept paper identifies several technologies that could play a role in managing agent identities and permissions. These include OAuth and OpenID Connect, widely used authentication and authorization protocols, along with identity lifecycle management tools such as the System for Cross-domain Identity Management (SCIM).
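In OAuth terms, a nonhuman client like an agent typically obtains short-lived access tokens via the client credentials grant. The sketch below only builds the request body for that grant; the token endpoint URL, client ID and scopes are hypothetical placeholders, not values from the paper.

```python
import urllib.parse

def client_credentials_request(token_url: str, client_id: str,
                               client_secret: str, scope: str) -> tuple[str, str]:
    """Build the form-encoded body for an OAuth 2.0 client credentials grant,
    the flow commonly used to issue tokens to nonhuman (machine) clients."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return token_url, body

url, body = client_credentials_request(
    "https://idp.example.com/oauth2/token",  # hypothetical authorization server
    "agent-sched-42", "s3cret", "calendar.read calendar.write")
print(body)
```

Issuing, rotating and revoking such credentials per agent, rather than sharing one set across many automations, is the kind of lifecycle management the paper's authentication questions are driving at.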

The proposal also references frameworks such as the Secure Production Identity Framework for Everyone (SPIFFE) and its runtime implementation, SPIRE, which provide cryptographic identities for software workloads operating in distributed systems.
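Under SPIFFE, each workload is named by a SPIFFE ID, a URI with the `spiffe` scheme, a trust domain as its host and a workload path. A minimal shape check, with `example.org` and the agent path as invented examples:

```python
from urllib.parse import urlparse

def is_valid_spiffe_id(uri: str) -> bool:
    """Check the basic shape of a SPIFFE ID: 'spiffe' scheme, a nonempty
    trust domain, and a workload path, e.g. spiffe://example.org/agent/scheduler."""
    parsed = urlparse(uri)
    return (parsed.scheme == "spiffe"
            and bool(parsed.netloc)
            and parsed.path not in ("", "/"))

print(is_valid_spiffe_id("spiffe://example.org/agent/scheduler"))  # True
print(is_valid_spiffe_id("https://example.org/agent"))             # False
```

In a SPIRE deployment, such identities are attested and delivered to workloads as cryptographic documents (SVIDs), so an agent could present a verifiable identity rather than a shared secret.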

Policy enforcement could draw on attribute-based access control systems such as Next Generation Access Control (NGAC), which enables fine-grained authorization decisions across complex environments.
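The core idea of attribute-based control is that a decision evaluates attributes of the subject, resource and action against policy rather than a static role list. A toy decision function, with invented attributes (clearance levels, owning team) standing in for a real NGAC policy:

```python
def abac_decide(subject: dict, resource: dict, action: str) -> bool:
    """Toy attribute-based access decision: permit only when the subject's
    attributes satisfy the policy for this resource and action."""
    # Illustrative policy: an agent may read a dataset only if its clearance
    # covers the dataset's classification and its owning team matches.
    if action != "read":
        return False
    clearance_order = ["public", "internal", "confidential"]
    return (clearance_order.index(subject["clearance"])
            >= clearance_order.index(resource["classification"])
            and subject["team"] == resource["owning_team"])

agent = {"clearance": "internal", "team": "security-ops"}
dataset = {"classification": "internal", "owning_team": "security-ops"}
print(abac_decide(agent, dataset, "read"))  # True
```

Because the decision is computed from attributes at request time, the same mechanism can narrow or widen an agent's effective access as its context changes, without rewriting role assignments.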

These tools would be implemented alongside existing NIST cybersecurity guidance, including the agency’s Zero Trust Architecture model and digital identity guidelines.

NIST’s potential project would focus primarily on enterprise deployments where organizations maintain visibility and control over the agents operating in their systems.

Several use cases highlighted in the concept paper illustrate how AI agents could be integrated into everyday operations.

One involves productivity-focused agents that assist employees with tasks such as managing schedules, drafting policy documents or generating recommendations.

Another use case focuses on security-oriented agents that analyze cybersecurity data and recommend or execute defensive actions.

A third potential use case involves software development and deployment pipelines, where AI agents may automate elements of coding, testing and release management.

In each of these scenarios, agents require access to sensitive data and enterprise systems, making robust identity and authorization controls essential.

If the project moves forward, the NCCoE intends to develop a practical implementation guide demonstrating how organizations can deploy AI agents while maintaining secure identity governance.

Such guidance would be built using commercially available technologies in NCCoE laboratories and would document real-world implementation approaches along with lessons learned.

The goal is to help organizations adopt agentic AI capabilities without sacrificing security or accountability.

The initiative forms part of a broader push by NIST to address the governance challenges posed by autonomous AI systems. Through research, standards development and collaborative projects with industry, the agency is seeking to establish the technical foundations necessary for what many expect to be the next major phase of AI deployment.

Comments on the paper are due on April 2.
