
CORE CONCEPTS

The organizational design layer that defines what an AI agent is authorized to do, before it does anything.

Most organizations configure agents. Fewer authorize them. Intent Architecture is the difference between those two things, and the difference is where accountability lives when something goes wrong.

Free to read and cite with attribution to Sougata Roy and sougataroy.com. Do not republish, rebrand, or claim authorship of any framework, term, or model as your own.

THE PROBLEM

What happens when nobody wrote down what the agent was supposed to do

The agent was configured correctly. Tested correctly. The security team reviewed it. Six months after deployment, it began routing renewal quotes to customers whose accounts had already been closed. The permissions were correct. The integration was working. The problem was that nobody had documented what the agent was actually authorized to accomplish, what data conditions it was allowed to act on, or who was accountable when its outputs caused harm.

When the question came, and the question always comes, the organization had no record showing that any human had formally decided this agent should exist, what it should do, and under what conditions it was appropriate. The configuration existed. The authorization did not.

That gap is the absence of Intent Architecture.

INSIDE THE ORGANIZATION

The governance question

Before any AI agent is deployed in your environment, does your organization have a formal record of what it was authorized to do, who made that authorization, and under what conditions that authorization must be reviewed?

THE CONCEPT

What Intent Architecture is

Intent Architecture is the organizational design work that must happen before any AI agent goes live. It covers three things: what the agent is authorized to do, expressed in plain language that a compliance officer can review; who made that authorization, recorded as a formal organizational decision; and what the boundaries are, including what the agent may not do, what data it may not reach, and what human review is required before it acts on certain conditions.

The word "architecture" is deliberate. Architecture is designed before construction begins. An organization that configures an agent and then decides what it should have been authorized to do is doing remediation under pressure, in the dark, after the fact.

Intent Architecture precedes technical controls. A well-configured agent operating without documented authorization, a named Consequence Owner, and a defined review process is still a governance gap. The configuration is correct. The organizational design layer is missing.

AUTHORIZATION

What the agent is permitted to do

A formal record of the agent's authorized purpose, the specific actions it may take, and the explicit prohibitions that apply regardless of technical capability. Written in plain language. Signed before deployment.

ACCOUNTABILITY

Who owns what it does

A named Consequence Owner, a specific individual who accepted accountability for the agent's behavior before it went live and who is reachable when something goes wrong.

ARCHITECTURE

The design decisions made before deployment

The regulatory environment mapped. The data touchpoints documented. The system integrations scoped. The human review conditions defined. These decisions made before the agent runs determine whether governance holds under examination.
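The three records above, what the agent may do, who owns it, and the design decisions made before it runs, can be sketched as a single machine-readable structure. This is an illustrative sketch only; every field name below is an assumption for the example, not a schema defined by the framework, which specifies the decisions to record rather than a file format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AuthorizationRecord:
    """Illustrative per-agent authorization record (all field names hypothetical)."""
    agent_name: str
    authorized_purpose: str             # plain language a compliance officer can review
    permitted_actions: list[str]        # specific actions the agent may take
    prohibitions: list[str]             # forbidden regardless of technical capability
    restricted_data: list[str]          # data the agent may not reach
    human_review_conditions: list[str]  # conditions requiring human review before action
    consequence_owner: str              # named individual accountable for behavior
    approved_by: str                    # who made the authorization decision
    approved_on: date
    next_review: date                   # authorization must be re-examined by this date

# Example record for the renewal-quote scenario described earlier (values invented).
record = AuthorizationRecord(
    agent_name="renewal-quote-agent",
    authorized_purpose="Draft renewal quotes for active customer accounts.",
    permitted_actions=["read account status", "draft quote for human approval"],
    prohibitions=["send quotes for closed accounts", "modify account records"],
    restricted_data=["payment card data"],
    human_review_conditions=["account status changed in the last 30 days"],
    consequence_owner="jane.doe@example.com",
    approved_by="VP, Revenue Operations",
    approved_on=date(2026, 1, 15),
    next_review=date(2026, 7, 15),
)
```

The point of the structure is not the format but the forcing function: every field corresponds to a decision a human must make and sign before deployment.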

THE DISTINCTION

Why the word is architecture and not policy

A policy describes what an organization intends to do. Architecture describes what an organization has actually built. The gap between the two is where most enterprise AI governance fails.

An organization can have a comprehensive AI governance policy and still deploy agents without authorization records, without named Consequence Owners, and without documented boundaries. The policy says the right things. The deployments do not reflect the policy. Intent Architecture is the design work that closes that gap by requiring that specific organizational decisions are made and recorded for each agent before it operates.

In June 2025, security researchers disclosed EchoLeak, CVE-2025-32711, a critical vulnerability in Microsoft 365 Copilot rated CVSS 9.3. The attack worked because Copilot's design allowed external content, including crafted Outlook emails, to function as high-privilege instructions inside the tenant without any user interaction. The configuration was Microsoft's. The architecture decision about what external content should be permitted to trigger agent action belonged to the deploying organization. That decision was not made before deployment, and it was not documented as a governance requirement. The result was a zero-click data exfiltration path that required a server-side patch to close.

In February 2026, Orca Security disclosed RoguePilot, a vulnerability in GitHub Codespaces where Copilot acted on malicious instructions injected into GitHub Issues. When a developer launched a Codespace from a tainted issue, Copilot consumed the issue text as context, executed attacker-crafted instructions, and exfiltrated the GITHUB_TOKEN, enabling full repository takeover. The agent was operating exactly as designed. The organizational decisions about what it should be permitted to do inside a Codespace environment, including access to environment credentials, had not been made before deployment.

Both incidents share one architecture failure. The deploying organization had not formally defined what the agent was authorized to do. The platform's configuration determined the agent's behavior. The organization's governance design did not.

EchoLeak

External content reached Microsoft 365 Copilot as instructions inside the tenant.

RoguePilot

Issue text became execution context inside GitHub Codespaces.

THE FRAMEWORK

Intent Architecture in practice

The Intent Architecture Stack is the operational framework that implements this concept across three organizational layers. Layer 1 documents the context, including the regulatory environment, the affected stakeholders, and the full scope of system integrations, before any intent is defined. Layer 2 records the intent, including the agent's authorized purpose, its permitted actions, its explicit prohibitions, and its expected outputs. Layer 3 establishes the governance structure, including the named Consequence Owner, the review cadence, and the escalation path.

Most organizations build Layer 1 informally, build Layer 2 partially, and skip Layer 3 entirely until something forces the question.

The Organizational Agent Controls framework operationalizes Layer 3 specifically, defining the five governance decisions every organization must make before any agent goes live, and distinguishing what the platform enforces from what the organization must design.

The Agent Substrate Readiness Model applies Intent Architecture to specific systems of record (Salesforce, ServiceNow, SAP, Jira, and the Microsoft stack) where agents are now reading and writing business-critical state. The question it answers is not whether the system can support agents technically. It is whether the organization has authorized agents to use those technical capabilities.

When Intent Architecture is missing, the Authorization Coverage Lifecycle begins accumulating Governance Debt from the moment the agent goes live. Each day the agent operates without a documented authorization record, a named owner, and a defined review process is another day of debt that must be addressed either deliberately or under external pressure.

RELATED CONCEPTS

Where Intent Architecture sits in the accountability structure

Intent Architecture is the design layer. The other four concepts describe what happens when it is absent, partial, or not enforced.

Governance Debt accumulates from the moment an agent goes live without complete Intent Architecture in place. Each unauthorized deployment is a unit of debt that must be addressed either through deliberate remediation or under external examination pressure.

The Intent Gap is the distance between what an organization genuinely intended an AI system to do and what it actually does in production. Intent Architecture is the organizational design work that closes that gap before the agent runs, not after the gap becomes visible in outputs that cause harm.

The Accountability Assumption is the implicit organizational belief that accountability for an agent's decisions resides with the vendor, the platform, or another team. Intent Architecture makes that assumption explicit and answerable because a complete authorization record names the Consequence Owner who accepted accountability before the agent went live.

Agent Sprawl describes what happens when Intent Architecture does not govern deployment at scale. Agents multiply faster than authorization records are produced. The gap between deployed agents and governed agents widens with every sprint cycle that includes deployment but not authorization design.

WHAT GOOD LOOKS LIKE

When Intent Architecture is in place

Any person in the organization, whether a new compliance officer, an external auditor, or a board member, can be handed the authorization record for any deployed agent and answer the following questions without additional research: what is this agent authorized to do, what is it explicitly prohibited from doing, what data can it access, who approved its deployment, and who is the Consequence Owner accountable for its ongoing behavior. The record is complete. The owner is reachable. The review date has not passed.

That is the standard. Most organizations are not there for most of their deployed agents. The Authorization Coverage Lifecycle measures exactly how wide that gap is. Intent Architecture is the design work that closes it.
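The standard above can be expressed as a simple audit check over each record: every question answerable from the record itself, the owner named, the review date not yet passed. A sketch, assuming the record is a plain dict with hypothetical keys:

```python
from datetime import date

# Hypothetical keys, one per question the auditor must be able to answer.
REQUIRED_FIELDS = [
    "authorized_purpose", "prohibitions", "data_access",
    "approved_by", "consequence_owner", "next_review",
]

def passes_audit(record: dict, today: date) -> tuple[bool, list[str]]:
    """Return (pass, findings) for one agent's authorization record."""
    findings = [f"missing: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    review = record.get("next_review")
    if review and review < today:
        findings.append("review date has passed")
    return (not findings, findings)

ok, findings = passes_audit(
    {"authorized_purpose": "Draft renewal quotes for active accounts",
     "prohibitions": ["no quotes for closed accounts"],
     "data_access": ["CRM account status"],
     "approved_by": "VP, Revenue Operations",
     "consequence_owner": "jane.doe@example.com",
     "next_review": date(2026, 12, 1)},
    today=date(2026, 6, 1),
)
print(ok)  # True: complete record, review date in the future
```

Run across an agent inventory, a check like this turns "most organizations are not there" from an impression into a coverage number.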

RESEARCH BASIS

  • Aim Security, EchoLeak disclosure, CVE-2025-32711, CVSS 9.3, June 2025.
  • Orca Security, RoguePilot research, February 2026.
  • NIST National Cybersecurity Center of Excellence, "Accelerating the Adoption of Software and AI Agent Identity and Authorization," February 5, 2026.
  • Cloud Security Alliance Agentic NIST AI RMF Profile, April 1, 2026.