Nuggets governance framework launches to handle growing liability around AI-driven actions

Nuggets builds a universal trust layer, which adds “cryptographic trust where identity stops,” and the company is building for a present and future of autonomous action.
In other words, it sees a near future of companies having to extend their identity and cloud infrastructure to handle all the autonomous AI agents working both within companies and agent-to-agent systems across the digital space.
Now, Nuggets Labs has developed a new Enterprise AI Governance Framework aimed at helping organizations control and audit AI systems. It comes as AI systems increasingly not only generate outputs but also execute actions in production environments.
Nuggets says the framework addresses a growing governance gap. While companies can manage who accesses AI systems, most cannot prove whether an AI‑initiated action was authorized, by whom, or under what constraints.
The vendor‑neutral model is designed for CISOs, CIOs, Chief Risk Officers, Heads of AI and procurement teams responsible for deploying and evaluating enterprise AI systems.
As autonomous agents begin initiating transactions, modifying infrastructure or accessing sensitive records, the company warns that traditional identity and access management (IAM) is no longer sufficient.
The framework introduces “Action Governance” — a new control layer that sits between access and execution. It defines a trust stack of Identity, Authority, Intent and Action along with governance domains and risk classification tiers.
According to Nuggets, existing AI governance models focus heavily on model development and safety, but do not address real‑time authorization of AI‑driven actions. The company argues that enterprises must be able to verify the identity of an AI actor, confirm the authority it was operating under and produce tamper‑resistant audit evidence on demand.
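Nuggets has not published an API for this, but the three requirements it names (verify the AI actor's identity, confirm its delegated authority, and produce tamper-resistant audit evidence) can be sketched in a few lines. All names below are hypothetical illustrations, with an HMAC standing in for whatever tamper-resistance mechanism the framework actually specifies:

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical signing key for tamper-evident records

def sign(record: dict) -> str:
    """HMAC over the serialized record, so later edits to the log are detectable."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def authorize_action(agent_id: str, action: str, grants: dict, audit_log: list) -> bool:
    """Check the agent's delegated authority before execution, and log the decision."""
    allowed = action in grants.get(agent_id, set())
    record = {
        "agent": agent_id,              # verified identity of the AI actor
        "action": action,               # what it attempted to do
        "decision": "allow" if allowed else "deny",
        "ts": time.time(),
    }
    record["sig"] = sign(record)        # tamper-evident audit entry
    audit_log.append(record)
    return allowed

# Example: an agent is granted one action but attempts two.
grants = {"agent-42": {"refund.issue"}}
log = []
print(authorize_action("agent-42", "refund.issue", grants, log))  # authorized
print(authorize_action("agent-42", "db.drop", grants, log))       # denied, but still logged
```

The point of the sketch is that every attempt, allowed or denied, leaves a signed record that can be produced for auditors on demand.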
Without this, organizations face mounting regulatory and operational risk as auditors and boards begin asking for proof of AI decision‑making. The framework outlines three steps for adoption: classifying AI deployments by risk tier; assessing gaps in identity, authority and auditability; and prioritizing controls for high‑risk systems before expanding to full policy enforcement and runtime governance.
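The brief's actual tier definitions are not public, so the adoption steps can only be illustrated with assumed criteria. A minimal sketch, where both the tier rules and the deployment fields are hypothetical:

```python
def classify_tier(deployment: dict) -> str:
    """Step 1: assign a risk tier. These rules are illustrative, not Nuggets' own."""
    if deployment.get("executes_actions") and deployment.get("touches_sensitive_data"):
        return "high"
    if deployment.get("executes_actions"):
        return "medium"
    return "low"

def assess_gaps(deployment: dict) -> list:
    """Step 2: flag missing controls across the three dimensions the framework names."""
    return [dim for dim in ("identity", "authority", "auditability")
            if not deployment.get(dim)]

# Step 3: work through deployments highest-risk first.
deployments = [
    {"name": "support-chatbot", "executes_actions": False, "identity": True,
     "authority": True, "auditability": True},
    {"name": "ops-agent", "executes_actions": True, "touches_sensitive_data": True,
     "identity": True, "authority": False, "auditability": False},
]
tier_order = {"high": 0, "medium": 1, "low": 2}
for d in sorted(deployments, key=lambda d: tier_order[classify_tier(d)]):
    print(d["name"], classify_tier(d), assess_gaps(d))
```

Here the ops-agent is surfaced first with its authority and auditability gaps, while the read-only chatbot can wait for the later policy-enforcement rollout.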
Nuggets says organizations that implement Action Governance will be better positioned to deploy autonomous AI systems at scale, meet regulatory expectations and demonstrate accountability. Those that do not, it warns, will face a recurring question whenever an AI system acts independently: how do you know it was authorized to do that?
Nuggets’ framework also includes a list of 18 procurement questions for evaluating AI systems, grouped by category; the list can be found, along with the full brief, here.
Article Topics
AI | AI agents | digital identity | digital trust | Nuggets