
ORIGINAL FRAMEWORKS

The Organizational Agent Controls

Five organizational decisions every enterprise must make before any agent goes live. The platform enforces identity and logging. Only your organization can authorize what the agent is permitted to do.

Platform-enforced access and organizational authorization are not the same control. An agent can be technically permitted by the platform and organizationally unauthorized at the same time. Most governance programs have only closed one of those two gaps.

v1.0 · April 2026 · Sougata Roy, sougataroy.com

Free to read and cite with attribution to Sougata Roy and sougataroy.com. Do not republish, rebrand, or claim authorship of any framework, term, or model as your own.

Agent Identity and Access: The Zero Trust Enforcement Checklist

v1.0
Platform cannot decide · Organization must

The organizational gap

An agent without a registered identity is not a governed tool. It is an unmonitored actor.

The problem

Why this framework exists

Zero Trust as a security philosophy holds that no user, device, or system should be trusted by default, regardless of whether it is inside or outside the network perimeter. Every access request is verified. Every permission is scoped to the minimum required. No access is assumed to remain valid indefinitely. The same logic applies to AI agents - and most organizations have not applied it.

Inside the organization

The governance question

For each AI agent operating in your environment, can your organization demonstrate that access is granted based on verified identity and explicit authorization, that access scope is limited to what the agent's documented purpose requires, and that a named person with authority can stop the agent without calling the developer first?


The framework

The five organizational decisions

The Organizational Agent Controls apply to every AI agent regardless of platform. Each one is a governance decision the deploying organization must make. The platform enforces access control and produces audit logs. It cannot make any of these five decisions on the organization's behalf.

Principle 1: Identity Registration Requirement

Every AI agent operating in the environment must have a verifiable, unique identity that distinguishes it from the human users who interact with it and from every other agent in the environment.

Without a separate identity for each agent, audit logs cannot distinguish the agent's actions from the actions of the humans in whose context it operated. An agent that operates under a shared service account, or under the identity of the human user invoking it, creates an audit trail that cannot attribute specific actions to the agent. Actions that cannot be attributed to a specific identity cannot be governed, reviewed, or defended under examination.

What platform provides

Microsoft Entra Agent ID assigns a distinct identity to each registered agent and produces an audit trail attributable to that identity.

What organization must design

The organizational requirement is a policy establishing that every AI agent receives a distinct identity before it is authorized to operate. The policy must define who is responsible for identity assignment, what the identity record must contain, and how agent identities are distinguished from human user identities in the audit infrastructure. There is a bypass risk worth naming explicitly. An agent that borrows a human user's identity, even with that user's knowledge, creates a governance failure regardless of what permissions are technically in place. When a specific action is reviewed after the fact, it is attributed to the human whose identity was used, not to the agent that performed it. Governance of AI agents requires that agent actions be attributable to agents.
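The registration policy can be checked mechanically against an agent inventory. A minimal sketch of such a check, assuming a hypothetical inventory format; the field names (`identity`, `identity_type`) are illustrative, not Microsoft Entra API names:

```python
# Sketch: flag agents that lack a distinct identity or borrow a non-agent one.
# The inventory shape and field names are illustrative assumptions.

def identity_gaps(inventory):
    """Return agents that fail the identity registration requirement."""
    gaps = []
    seen = {}
    for agent in inventory:
        identity = agent.get("identity")
        if identity is None:
            gaps.append((agent["name"], "no registered identity"))
        elif agent.get("identity_type") != "agent":
            # Borrowed human or shared service identity: actions would be
            # attributed to the wrong principal in the audit trail.
            gaps.append((agent["name"], "identity not agent-specific"))
        elif identity in seen:
            gaps.append((agent["name"], f"identity shared with {seen[identity]}"))
        else:
            seen[identity] = agent["name"]
    return gaps

inventory = [
    {"name": "invoice-bot", "identity": "agent-001", "identity_type": "agent"},
    {"name": "hr-helper", "identity": "svc-shared", "identity_type": "service"},
    {"name": "triage-bot", "identity": None},
]
print(identity_gaps(inventory))
```

The check deliberately treats a shared or human-type identity as a failure even when permissions are otherwise correct, mirroring the bypass risk named above.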

Principle 2: Intent-Bound Access Scope

An agent's access to data and systems must be scoped to exactly what its documented business purpose requires. Access granted for convenience or organizational simplicity is not least privilege. It is over-permission, and over-permission in an AI agent is not a theoretical risk. It is the mechanism by which users without appropriate clearance receive data they should not have.

The organizational requirement is a policy establishing that an agent's access scope is derived from its Intent Document - specifically from the authorized actions and data access fields - not from developer discretion or operational convenience. The compliance review conducted before deployment must verify that access scope matches the documented purpose. Any access that exceeds the documented purpose requires explicit justification and re-authorization.

What platform provides

Microsoft Purview sensitivity labels and Microsoft Entra Agent ID permission scopes provide the technical mechanism for enforcing access boundaries. The platform enforces what the organization configures.

What organization must design

The practical test: for each deployed agent, ask whether the agent could still perform its documented business purpose if it had access only to the data and systems named in its authorization record. If yes, any additional access is over-permission. If no, either the authorization record is incomplete or the additional access is genuinely required and must be documented accordingly.
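That practical test reduces to a set comparison between granted access and the access named in the authorization record. A sketch under assumed, made-up resource names; nothing here is a Purview or Entra call:

```python
# Sketch: compare an agent's granted access against its authorization record.
# Resource names and record shape are illustrative assumptions.

def over_permission(granted, documented):
    """Return access the agent holds beyond its documented purpose."""
    return sorted(set(granted) - set(documented))

granted = {"sharepoint:site-finance", "sharepoint:site-hr", "mail:read"}
documented = {"sharepoint:site-finance", "mail:read"}

excess = over_permission(granted, documented)
if excess:
    # Each item needs explicit justification and re-authorization,
    # or it should be revoked.
    print("over-permissioned:", excess)
```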

Principle 3: Authorization Expiry Discipline

Authorization is not granted once and assumed to remain valid indefinitely. The conditions that justified an agent's access scope may change without the agent's permissions changing. The relevant question is not whether the agent had permission when it was deployed. It is whether the permission remains appropriate today.

AI agents persist after the organizational conditions that justified their deployment have changed. A compliance-monitoring agent whose scope was appropriate under one regulatory framework may become over-permissioned when that framework changes. An agent whose purpose was defined during a product launch may continue operating after the product is retired. An agent whose accountable owner has departed has no one ensuring its continued appropriateness. None of these conditions automatically triggers a review in any governance infrastructure. They trigger a review only if the organization has documented them as review trigger conditions.

What platform provides

Microsoft Entra Agent ID supports access reviews and credential expiry. These mechanisms must be configured and scheduled by the organization.

What organization must design

The organizational requirement is that each agent's authorization record includes specific trigger conditions that require re-authorization: a change in the agent's data access scope, a change in applicable regulatory requirements, a change in the named accountable owner, and a security incident involving the agent. A review cadence must also be defined - a date by which authorization is reviewed regardless of whether a trigger has occurred. An authorization record without a review date is an artifact, not governance. It was accurate when it was written. There is no organizational mechanism ensuring it remains accurate today.
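A review-due check over such an authorization record might look like the following sketch; the trigger labels and record fields are assumptions for illustration only:

```python
from datetime import date

# Sketch: decide whether an agent's authorization needs re-review today.
# Field names and trigger labels are illustrative assumptions.

TRIGGERS = {"scope_changed", "regulation_changed", "owner_changed",
            "security_incident"}

def review_due(record, today, events):
    """Return reasons this authorization must be re-reviewed, if any."""
    reasons = [e for e in events if e in TRIGGERS]
    if record.get("review_by") is None:
        # No review date at all: an artifact, not governance.
        reasons.append("no review date on record")
    elif today > record["review_by"]:
        reasons.append("review date has passed")
    return reasons

record = {"agent": "compliance-monitor", "review_by": date(2026, 1, 31)}
print(review_due(record, date(2026, 4, 1), events={"owner_changed"}))
```

Note that the cadence check and the trigger check are independent, as the text requires: a passed review date flags the record even when no trigger event has occurred.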

Principle 4: Isolation and Stop Capability

Every agent must have a documented, tested, and practiced procedure for suspension or termination. That procedure must be executable by someone other than the original developer, and it must work without prior notice to the agent's builder and without access to the original development environment.

An agent that cannot be stopped quickly is an agent whose risk profile depends entirely on its continued correct behavior. When behavior deviates - through an unexpected input, a change in its operating environment, or an adversarial interaction - the organization's ability to contain the impact depends entirely on how quickly the agent can be stopped and how much organizational authority is required to do it.

What platform provides

Copilot Studio and Microsoft Foundry provide suspension and deletion capabilities at the platform level. Microsoft Entra Agent ID supports credential revocation.

What organization must design

The organizational requirement is a documented stop procedure for each deployed agent, specifying the person or role with authority to suspend or terminate the agent, the steps they follow, and evidence that the procedure has been tested under realistic conditions. Tested means an actual exercise in which the procedure was executed and confirmed to work, not a theoretical walkthrough. The practical test: identify the named accountable owner for a deployed agent. Ask them to describe how they would stop the agent if they received a call at midnight indicating it was behaving unexpectedly. If they cannot describe the procedure without calling the original developer first, the isolation capability does not exist.
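The stop-procedure requirement can likewise be audited from a record. A sketch assuming hypothetical record fields; it encodes the two failure modes named above, a stop owner who is the original developer and a procedure that has never been exercised:

```python
from datetime import date

# Sketch: validate a stop-procedure record for one agent.
# Record fields are illustrative assumptions, not a platform schema.

def stop_procedure_gaps(record, today, max_test_age_days=365):
    """Return reasons this agent's isolation capability does not exist."""
    gaps = []
    if not record.get("stop_owner"):
        gaps.append("no named person with stop authority")
    elif record["stop_owner"] == record.get("developer"):
        # Must be executable without the original builder.
        gaps.append("stop authority rests with the original developer")
    tested = record.get("last_tested")
    if tested is None:
        gaps.append("procedure never exercised")
    elif (today - tested).days > max_test_age_days:
        gaps.append("last test is stale")
    return gaps

record = {"agent": "triage-bot", "developer": "dev-team-a",
          "stop_owner": "dev-team-a", "last_tested": None}
print(stop_procedure_gaps(record, date(2026, 4, 1)))
```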

Principle 5: Explicit Authorization on Record

The organizational decision to deploy an agent must be recorded as a formal decision. Not inferred from the absence of objection, not implied by the fact that deployment occurred, and not assumed from the existence of a technical implementation. An agent that is running is not an agent that has been authorized. It is an agent that has not been stopped.

The most common governance gap in enterprise AI deployments is not that agents were built without anyone's knowledge. It is that the authorization was never recorded as a formal organizational decision. A developer received informal approval in a meeting. A manager nodded when shown a demo. A pilot was never formally transitioned to production through a process that included an authorization step. The agent has been operating for six months and nobody has produced an authorization record because nobody established that one was required.

What the research establishes

NIST's February 2026 concept paper on AI agent identity and authorization identifies non-repudiation - the ability to prove that specific agent actions were authorized - as a core governance requirement for AI agents in enterprise environments. Non-repudiation requires that the authorization existed before the action was taken. An authorization record created after a problem is discovered is an incident response artifact. It is not governance evidence.

What organization must design

The organizational requirement is that a completed authorization record is a prerequisite for any agent entering production operation, and that 'production operation' is defined clearly enough that the transition from pilot to production cannot occur without triggering the authorization requirement. An agent used in a business-critical function, regardless of what it is called internally, is in production.
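The non-repudiation requirement is a simple ordering constraint: the authorization record must exist and must predate production operation. A sketch with assumed field names:

```python
from datetime import date

# Sketch: verify an authorization record exists and predates production.
# Field names are illustrative assumptions.

def authorization_valid(record):
    """True only if a formal authorization was recorded before go-live."""
    if record.get("authorized_on") is None:
        return False  # running, but never authorized
    # A record created after go-live is an incident response artifact,
    # not governance evidence.
    return record["authorized_on"] <= record["production_since"]

ok = {"authorized_on": date(2026, 1, 10), "production_since": date(2026, 2, 1)}
late = {"authorized_on": date(2026, 3, 15), "production_since": date(2026, 2, 1)}
print(authorization_valid(ok), authorization_valid(late))
```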

Zero Trust enforcement checklist

Agent Identity and Access: The Zero Trust Enforcement Checklist

An agent without a registered identity is not a governed tool. It is an unmonitored actor.

Is this agent's identity registered?

Every agent must have an explicit identity in Microsoft Entra Agent ID before deployment.

What is its permission scope?

Permissions must be documented at the task level and match the agent's intent statement exactly.

When was scope last reviewed?

Review cadence must be defined before deployment and enforced on schedule rather than assuming permanence.

What data classification boundaries apply?

Microsoft Purview sensitivity labels determine access; boundaries are misconfigured if the agent sees data beyond scope.

What is the escalation path for anomalous behavior?

Identify who is notified and what actions are taken before an incident occurs.

Platform versus organization

What good looks like

Every agent deployed in the environment has a verified identity distinct from human users. Every agent's access scope matches its documented purpose. Every agent has a review date in the future, not the past. Every accountable owner can describe the stop procedure without calling the original developer. Every agent has an explicit authorization record that predates its production operation. When all five conditions are true for all agents, the Agentic Zero Trust posture is in place.
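The five conditions can be rolled up into one posture check per agent. A sketch assuming hypothetical boolean fields on an inventory record; an empty result means the posture described above is in place:

```python
# Sketch: the five organizational conditions as one posture check per agent.
# The boolean field names are illustrative assumptions.

CONDITIONS = ["identity_registered", "scope_matches_purpose",
              "review_date_in_future", "stop_procedure_tested",
              "authorization_on_record"]

def posture_gaps(agents):
    """Map each agent to the conditions it fails; empty dict = posture in place."""
    return {a["name"]: [c for c in CONDITIONS if not a.get(c)]
            for a in agents if any(not a.get(c) for c in CONDITIONS)}

agents = [
    {"name": "invoice-bot", **{c: True for c in CONDITIONS}},
    {"name": "hr-helper", **{c: True for c in CONDITIONS},
     "stop_procedure_tested": False},
]
print(posture_gaps(agents))
```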

Why it lasts


The honest version of this standard: very few organizations are there. The value of the model is not that it can be fully satisfied in the near term. It is that it names the five specific conditions that distinguish governed agent deployment from ungoverned agent deployment, and it makes the gap between the two visible and measurable. Zero Trust as a technical architecture has been named, codified, and commercially adopted. What has not been codified is the organizational design layer beneath it - the five decisions a deploying organization must make that no platform, vendor, or security tool can make on its behalf.

Who it is for


The consequence is documented and specific. When users interact with organizational AI agents, authorization is evaluated against the agent's identity, not the requester's identity. A new employee with intentionally limited permissions querying a shared organizational AI agent can receive detailed sensitive customer data they would not have been able to access directly. Nothing is misconfigured. No policy is violated. The agent's access is scoped to its function, not to the authorization level of whoever is asking. The audit log attributes the access to the agent, not to the employee. The exposure occurs without anyone noticing, because the controls designed to prevent it were built for a world where humans access data directly.

FAQ

Questions leaders ask before deployment

Research brief


Source: NIST National Cybersecurity Center of Excellence, "Accelerating the Adoption of Software and AI Agent Identity and Authorization," February 5, 2026.


60-minute operating sprint

Apply this framework in one working session

Use this as a live governance exercise. Leave the session with named evidence, a visible gap, and a next owner rather than another discussion note.

Working session board

One pass through the framework. One evidence trail.

5 steps · 60 minutes · 1 owner · Live decision

01 · 10 minutes

Verified identity

For each agent in your environment, does it have a Microsoft Entra Agent ID? An agent without a verified identity is operating anonymously in your directory. Count how many do and how many do not.

Output

Written evidence ready for the next governance decision.

02 · 15 minutes

Least privilege

Pick three agents. For each, what is the broadest permission it holds? Does it need that permission for its stated purpose? An agent with read access to all of SharePoint when its function only requires access to one site library is over-permissioned. Document what you find.

Output

Written evidence ready for the next governance decision.

03 · 15 minutes

Continuous monitoring

Are agent interactions logged in Microsoft Purview? Is someone reviewing those logs on a defined cadence? What is the alert condition that triggers a human review? If no one has defined the alert condition, monitoring exists but is not active governance.

Output

Written evidence ready for the next governance decision.

04 · 10 minutes

Isolation capability

For each of the three agents you examined: how long would it take to stop the agent if it behaved anomalously? Who has the authority to stop it? Have you tested that the stop mechanism works? Write the time and the name.

Output

Written evidence ready for the next governance decision.

05 · 10 minutes

Explicit authorization

For each agent, is there a record showing explicit authorization before deployment? Not a license agreement. An organizational decision record. If not, explicit authorization is missing from your Zero Trust posture.

Output

Written evidence ready for the next governance decision.