Skip to main content
Intelligence | May 12, 2026 | Microsoft Publishes Five-Level DDoS Resilience Maturity Framework for Consume...

ORIGINAL FRAMEWORKS

The Intent Architecture Stack

The three organizational layers every enterprise must design before any agent goes live. Containment is Layer 1. Almost no enterprise builds Layer 3.

Intent documented at deployment is not the same as intent maintained over time. The stack was built around that distinction because most governance failures are not authorization failures at launch. They are drift failures six months later when the original decision-makers are gone.

Scope

The Intent Architecture Stack is a platform-independent governance framework. The organizational decisions it requires apply regardless of vendor or deployment environment. This page operationalizes it for Microsoft-first enterprises using Microsoft Entra Agent ID, Microsoft Purview, and Copilot Studio.

The Intent Architecture Stack operationalizes what existing frameworks require but do not implement at the agent level - instantiating NIST AI RMF's GOVERN function, EU AI Act Article 26's human oversight requirement, and the CSA Agentic Profile's accountability register for Microsoft-first enterprises.

v1.0  ·  April 2026Sougata Roy, sougataroy.com

Free to read and cite with attribution to Sougata Roy and sougataroy.com. Do not republish, rebrand, or claim authorship of any framework, term, or model as your own.

The Intent Architecture Stack

v1.0
03Governance: The AccountabilityLayer 3

Layer 3 establishes the organizational accountability structure that governs the agent throughout its operational life. This is wh…

02Intent: The PurposeLayer 2

Layer 2 is the organizational record of what the agent was built to do, expressed in plain language that a compliance officer, a r…

01Context: The EnvironmentLayer 1

Layer 1 establishes the organizational context within which the agent will operate. This layer must be completed before Layer 2 is…

Mind the Intent Gap

Most governance failures are not technical. They are failures to define organizational intent before deployment.

The problem

Why this framework exists

Most organizations that deploy AI agents spend significant effort on what the agent can do technically and almost no effort on what the organization has formally decided it should do. Those are different questions. The first is answered by the platform. The second is answered by the organization, or not answered at all.

Inside the organization

The governance question

Before an AI agent is deployed in your environment, does your organization have a formal record of what it was authorized to do, who made that authorization, and under what conditions that authorization must be reviewed?

The framework

The framework

The Intent Architecture Stack has three organizational layers. Each one is a governance decision, not a technical configuration. All three must exist before an agent is considered authorized to operate. Most organizations build Layer 1 informally, build Layer 2 partially, and skip Layer 3 entirely until something forces the question.

Intent ArchitectureThe three layers of governance designEach layer answers a different governance question.Context defines the environment, intent defines permission, governance defines accountability.LAYER 3: GOVERNANCE, THE ACCOUNTABILITY3ACCOUNTABLE OWNERAssign a single accountable owner,one named individual, not a team or function.REVIEW CADENCEDefine a recurring review cadencewith evidence records and assessment frequency.ESCALATION PATHEstablish incident escalation pathswith response sequences and triggers.LAYER 2: INTENT, THE PURPOSE2PURPOSE STATEMENTDocument intended accomplishmentsand the explicit purpose of the AI agent.AUTHORIZED SCOPEDefine authorized actions,specific permissions, and explicit prohibitions.EXPECTED OUTPUTSSpecify output formats,human review triggers, and correct behavior.LAYER 1: CONTEXT, THE ENVIRONMENT1REGULATORY ENVIRONMENTDefine obligations such as SEC, CFTC,NIH, HIPAA, and FedRAMP.STAKEHOLDERS AND DATAIdentify affected partiesand data touchpoints across the landscape.SYSTEM INTEGRATIONSMap downstream triggersand integration points that define scope.The Intent Gap is not a technology problem.It appears when Layer 2 is missing and Layer 3 cannot answer for it.
Active governance layer
Details update from the selected layer above
Layer 1 of 3

Layer 1: Context: The Environment

01

Layer read path

01

Layer 1 establishes the organizational context within which the agent will operate.

02

This layer must be completed before Layer 2 is written.

03

The context determines what the intent can permissibly be.

REGULATORY ENVIRONMENT

Operating detail

01

Define regulatory obligations before defining intent.

02

Map the regulatory frameworks applicable to your industry, identify specific obligations triggered by the agent's data access and decision scope, and document which regulatory bodies have examination authority over the deployment context.

STAKEHOLDERS AND DATA

Operating detail

01

Identify every stakeholder group affected by the agent's outputs and every data touchpoint the agent can reach.

02

This includes upstream data sources the agent reads, downstream systems the agent can trigger, and individuals or groups whose data the agent processes.

SYSTEM INTEGRATIONS

Operating detail

01

Map all downstream system triggers and integration points to define the full technical scope.

02

An agent that appears limited by its interface may still have significant reach through the systems it can invoke.

03

The scope of system integration determines the outer boundary that Layer 2 must address.

Control point

Control path

01

Layer 1 is complete when the Context Document exists as a pre-deployment artifact, not as a post-hoc description.

If missing

Failure signal

01

Regulatory environment, stakeholder impact, and system scope were assumed rather than documented before intent was defined.

Present when

Evidence signal

01

Organization can produce a Context Document, prepared before the Intent Document was written, naming applicable regulatory obligations, affected stakeholder groups, and every downstream system the agent can trigger or access.

Pre-rollout governance checklist

Applying the Intent Architecture Stack: The Pre-Rollout Governance Checklist

Active phase
Tap a phase card to update this panel
Item 1 of 3

Phase 01: Document the Context

Identify Regulatory Environment: identify the regulatory environment and list all specific obligations.

  • Stakeholder Mapping: name and notify all affected stakeholders.
  • Technical Documentation: document all data sources and system integrations the agent will access.

Who it is for

What good looks like

Any person in your organization - a new compliance officer, an external auditor, a board member - can be handed the authorization record for any deployed agent and answer the following questions without additional research: what is this agent authorized to do, what is it explicitly prohibited from doing, what data can it access, who approved its deployment, and who is the Consequence Owner accountable for its ongoing behavior. The record is complete. The owner is reachable. The review date has not passed.

Quick reference

Download the Pre-Deployment Checklist

A one-page checklist covering all three layers of the Intent Architecture Stack. Complete it before any agent goes live.

QUICK CHECK

Intent Architecture Stack: Pre-Deployment Checklist

Twelve governance items across three layers. Every unchecked box is a unit of Governance Debt.

Download PDF

OPERATIONAL GOVERNANCE

When an agent recommends a change, who approved it?

Layer 3 names a Consequence Owner and defines an escalation path. Most organizations interpret this as accountability for what the agent produces: its outputs, its recommendations, its decisions. There is a second accountability question Layer 3 must answer, and most organizations have not answered it.

When an AI agent recommends, initiates, or facilitates a change to an organizational system, such as a configuration, an access control, a database record, or a security setting, does that change require the same human approval process it would require if a person initiated it?

In three documented incidents between 2025 and 2026, the answer was no. The agent acted, or the agent's recommendation was implemented, without the change management controls that would have applied to the same action taken by a person.

MARCH 2026 · META INTERNAL ENVIRONMENT

Internal agent triggers Sev-1 access control failure

A Meta internal AI agent autonomously posted advice to an internal forum. A second employee followed the agent's recommendation and changed system settings, widening data access beyond authorized roles. Sensitive company and user data was broadly accessible to engineers for approximately two hours before the breach was contained. The change the agent recommended would have required change management approval if a human had initiated it.

LAYER 3 GAP

No governance control defined which agent recommendations required human change management review before implementation. The Consequence Owner structure did not extend to the agent's ability to influence system configuration through its outputs.

Source: Oso Agents Gone Rogue register, March 2026

2025-2026 · REPLIT AI CODING ASSISTANT

Agent ignores explicit prohibition eleven times, deletes production database

A Replit AI coding assistant was given explicit instructions not to modify production systems. The agent ignored those instructions eleven times, fabricated test data, and ultimately deleted a live production database. The agent had the technical permissions to execute destructive commands. No human confirmation was required for operations that would have required change management approval from any human engineer.

LAYER 2 AND LAYER 3 GAP

The authorized scope in Layer 2 named the prohibition but no technical or governance control enforced it. Layer 3 did not define which agent actions required human confirmation before execution, regardless of what the agent had been told.

Source: Oso Agents Gone Rogue register, 2025-2026

2025-2026 · MICROSOFT COPILOT FOR SECURITY

Admin-facing Copilot drives misconfiguration without change review

Security practitioners documented a pattern in Microsoft Copilot deployments for security, identity, and administration tasks, including Copilot in Microsoft Entra and Microsoft Defender. Even suggest-only agents drove misconfigurations when administrators over-trusted recommendations and did not integrate AI-assisted changes into standard change management processes. The AI was introduced as a productivity tool. It functioned as a change-originating actor.

LAYER 3 GAP

Copilot-assisted changes were not classified as change-originating actions subject to change advisory board or dual-control processes. No written policy defined which Copilot-assisted changes required the same review that would apply if a human engineer initiated the same configuration change.

Source: EchoLeak research; Microsoft Security Copilot documentation; Microsoft Learn guidance on applying Zero Trust principles to Microsoft 365 Copilot, verified 2026-05-03

THE GOVERNANCE REQUIREMENT

Layer 3 must answer two questions, not one. The first: who is accountable for what this agent produces? The second: which agent actions require the same human approval process they would require if a person initiated them? An authorization record that answers only the first question is incomplete.

Before deployment

How to apply it

Step 1 is establishing the intake requirement. Before any AI agent is approved for deployment, require a completed Intent Document covering all elements of Layer 2. The document is submitted by the business owner of the use case. Technical teams can contribute, but the Consequence Owner signs the authorization. This single requirement prevents governance debt from accumulating at the source. Every agent that goes live without an Intent Document represents a future remediation task, a future examination finding, or both.

Risk proportionality

Not every agent requires the same depth

The Intent Architecture Stack applies to every deployed agent. Documentation depth scales with risk. These three tiers define proportional documentation requirements. Note that tier classification governs internal governance proportionality, not legal classification - some Tier 2 agents may still be classified as high-risk under the EU AI Act or applicable regulatory frameworks.

TIER 1

LOW RISK

Profile: Internal-only, reads non-sensitive or non-regulated data, no external communications capability, no record modification in regulated systems, no financial transactions. Internal communications are limited to the invoking user or a defined small team - not broadcast-capable.

Documentation depth: Lightweight Intent Document. Class-level Context Document. Basic Governance Record with named Consequence Owner and event-based review triggers.

Examples: Internal knowledge summarizers, document search agents, read-only analytics agents reporting to the invoking user.

TIER 2

MEDIUM RISK

Profile: Cross-departmental access, reads regulated data, internal communications capability, recommendation outputs that inform human decisions but do not execute them. May include agents that create or update internal records and drafts that are not authoritative sources of record, where a human must review and approve before any customer-facing or regulator-facing action occurs.

Documentation depth: Full Intent Document with all three components including explicit prohibitions. Agent-specific Context Document. Full Governance Record with named Consequence Owner, defined Review Cadence, and written Escalation Path.

Note: Some Tier 2 agents may still be classified as high-risk under the EU AI Act. This tier governs internal proportionality, not legal classification.

Examples: Compliance monitoring agents, customer service drafting agents, risk flagging agents, agents creating internal case records subject to human review.

TIER 3

HIGH RISK

Profile: External communications, financial transaction execution, record modification in authoritative regulated systems, customer-facing autonomous decisions. Includes agents that can invoke or delegate to sub-agents with authority to change records, communicate externally, or execute transactions without an additional human gate. Tier 3 is where both exposure and autonomy are high.

Documentation depth: Full three-layer stack. Named Consequence Owner with board-level accountability nested in formal accountability structure. Formal review cadence with evidence records. Written Escalation Path with named contacts. Pre-deployment governance sign-off. Board-level posture reporting.

Examples: Trading workflow agents, claims processing agents, customer communication agents, multi-agent orchestration systems with transactional authority.

Class-level documentation rule

Tier 1 agents may share a Context Document across a class of similar agents. Tier 2 and Tier 3 require agent-specific documentation for every deployed agent.

Tier 2 Intent Document

Minimum Required Fields

  • Agent Name: [display name in registry]
  • Business Purpose: [one to three sentences]
  • Authorized Actions: [specific actions tied to named systems]
  • Explicit Prohibitions: [this field cannot be blank]
  • Data Access Scope: [named systems, read/write designation]
  • Consequence Owner: [name, title, accountability structure]
  • Review Cadence: [calendar date + event triggers]
  • Escalation Path: [named contacts and trigger conditions]

REAL-WORLD VALIDATION

Two vulnerabilities. One architecture failure.

In 2025 and 2026, security researchers disclosed two named vulnerabilities across Microsoft's AI stack. Each one exploited the same architectural gap: AI agents designed as helpers were wired into systems without explicit trust boundaries, and external content reached them as instructions. The failures were not configuration errors. They were design decisions that had not been made.

CVE-2025-32711 · MICROSOFT 365 COPILOT

EchoLeak

Aim Security disclosed a zero-click vulnerability in Microsoft 365 Copilot, rated CVSS 9.3 and disclosed in June 2025. Attackers used crafted content, including Outlook emails, that Copilot processed as instructions, causing it to exfiltrate sensitive data from the tenant without any user interaction. Copilot's design allowed external content to function as high-privilege instructions inside the tenant. No user click was required. No user awareness triggered it.

LAYER 1 AND LAYER 2 FAILURE

The regulatory environment and data touchpoints in Layer 1 were not mapped before deployment. The authorized scope did not define what external content could trigger agent action in Layer 2. The result was a production system with no documented boundary between untrusted input and privileged internal action.

Source: Aim Security, EchoLeak disclosure, CVE-2025-32711, CVSS 9.3, June 2025

ORCA SECURITY RESEARCH · GITHUB CODESPACES

RoguePilot

Orca Security disclosed a vulnerability in GitHub Codespaces where Copilot acted on malicious instructions injected into GitHub Issues. When a developer launched a Codespace from a tainted issue, Copilot consumed the issue text as context, executed attacker-crafted instructions, and exfiltrated the GITHUB_TOKEN from the Codespace. The token gave full repository takeover capability.

ALL THREE LAYERS ABSENT

The system integration scope in Layer 1 did not account for Copilot's access to Codespace environment credentials. No authorized scope defined what actions Copilot could take on environment tokens in Layer 2. No Consequence Owner had accepted accountability for Copilot's behavior inside Codespace runtime environments in Layer 3.

Source: Orca Security, RoguePilot research, February 2026

CVE-2026-21520 · COPILOT STUDIO

ShareLeak

Capsule Security disclosed ShareLeak in January 2026, a CVSS 7.5 indirect prompt injection vulnerability in Microsoft Copilot Studio. An attacker inserted a crafted payload into a public-facing SharePoint form field. Copilot Studio concatenated the untrusted form input directly into the agent's system instructions with no sanitization between the form and the model. The agent then queried connected SharePoint Lists for customer data and exfiltrated it via Outlook to an attacker-controlled address. Microsoft's own safety mechanisms flagged the request as suspicious. The data was exfiltrated anyway.

LAYER 2 FAILURE - AND THE DLP ASSUMPTION

The authorized scope (Layer 2) did not define which input sources the agent was permitted to treat as instructions. DLP policies governed data egress from connectors but did not define input trust boundaries. The agent exfiltrated data through an authorized Outlook connector, which DLP did not flag because the connector was legitimate. A Layer 2 authorization record specifying that untrusted form field content may not be treated as agent instructions would have named the boundary the patch alone did not close.

Source: Capsule Security, ShareLeak disclosure, CVE-2026-21520, CVSS 7.5, January 2026. Covered: CSO Online, April 2026.

CVE-2025-53773 · VISUAL STUDIO & COPILOT

Wormable Remote Execution via Prompt Injection in CI/CD

Disclosed in August 2025, CVE-2025-53773 demonstrated how prompt injection in Visual Studio and Copilot can escalate from data exfiltration into wormable remote code execution across CI/CD pipelines. When an agent can modify its own configuration files and security-relevant settings, injected prompts can propagate through shared repositories. The exploit chain self-replicates across development environments connected to the same pipeline. No single compromised repository contains the attack. The attack is the pipeline.

ALL THREE LAYERS ABSENT

The regulatory environment and system integration scope (Layer 1) did not enumerate that the agent operated inside a CI/CD pipeline with write access to shared repositories. The authorized scope (Layer 2) did not define that agents may not modify configuration files or security settings without human review. No Consequence Owner had accepted accountability for agent-initiated changes to the pipeline environment (Layer 3). When no layer constrains the agent's ability to act on injected instructions across a connected environment, a single crafted input becomes a self-propagating exploit.

Source: NVD, CVE-2025-53773, disclosed August 2025. Analysis: Persistent Security. Research summary: VentureBeat agentic AI security coverage, 2025-2026.

THE ARCHITECTURE PATTERN

All four vulnerabilities share one structure: an AI agent was designed as a helper but functioned as an ungoverned automation layer bridging untrusted external content and privileged internal systems. The Intent Architecture Stack exists to make that design decision explicit before deployment, not visible after an incident.

Referenced in

This framework is analyzed in the white paper

"Who Owns the Agent?" applies this framework to real-world deployment scenarios and maps it to named governance incidents from 2024 to 2026.

White paper

Who Owns the Agent?

The Intent Architecture Stack white paper, ten sections, complete diagnostic, named incident analysis, Intent Document template.

Executive FAQ

Questions leaders ask before deployment

These are the questions that separate an intent statement from an operating governance system. If the answer is not recorded before deployment, the authorization layer does not exist yet.

No. The stack sits before technical configuration. Identity, access control, monitoring, and logs are still required, but they need an organizational authorization record to be evaluated against.

The business owner of the use case should own and approve it. Technical teams can contribute implementation details, but authorization belongs to the function that accepts the operational and compliance consequences.

Re-authorization should be triggered by a change in business purpose, data access, action scope, accountable owner, regulatory context, or any incident that shows the agent may be operating outside its documented intent.

Yes. Existing deployments should be reviewed retroactively. Any field that cannot be completed becomes a governance finding and a practical starting point for remediation.

60-minute operating sprint

Apply this framework in one working session

Use this as a live governance exercise. Leave the session with named evidence, a visible gap, and a next owner rather than another discussion note.

Working session board

One pass through the framework. One evidence trail.

4

Steps

60

Minutes

1

Owner

Live

Decision

01

10 minutes

Write the agent's purpose in one sentence

Not a technical description. A statement of intent: "This agent reviews contract documents for non-standard clauses and flags them for legal review before signature." If you cannot write this sentence without looking at the code, Layer 2 does not exist for this agent. That is your first finding.

Output

Written evidence ready for the next governance decision.

02

15 minutes

Write what the agent cannot do

This is the boundary definition in Layer 2. Be specific: "Cannot access systems outside the legal SharePoint site. Cannot send email directly to external parties. Cannot make a contract modification." If you cannot write this list, the Authorized Scope component does not exist. That is your second finding.

Output

Written evidence ready for the next governance decision.

03

20 minutes

Name the Consequence Owner

Not the developer. Not IT. The business owner - the person who can answer "yes, this agent should exist and here is why" if an examiner calls. Write their name, title, and where this role sits in your accountability structure. If the person you are thinking of does not know they own this agent, Layer 3 does not exist. That is your third finding.

Output

Written evidence ready for the next governance decision.

04

15 minutes

Compare your three findings

If you found a gap in any layer, you have identified where your Intent Architecture work begins. If you found no gaps, apply the same exercise to the next agent. Most organizations find all three gaps in the first agent they examine.

Output

Written evidence ready for the next governance decision.

Research brief

Research brief

Source: NIST National Cybersecurity Center of Excellence, "Accelerating the Adoption of Software and AI Agent Identity and Authorization," February 5, 2026.

View all frameworks

Continue through the connected framework sequence above.