
CORE CONCEPTS

The AI agents you approved are not the problem. The ones you don't know about are.

Agent Sprawl is the proliferation of AI systems across an enterprise without corresponding governance architecture. It operates at three distinct tiers. Most organizations are actively managing only the first, and the third is where the largest incidents are now occurring.

Free to read and cite with attribution to Sougata Roy and sougataroy.com. Do not republish, rebrand, or claim authorship of any framework, term, or model as your own.

THE PROBLEM

The inventory that doesn't match reality

The IT team has a list of approved AI tools. The business units have a different count. The developers have built agents that appear in neither list. The citizen developers in Operations built automations in Copilot Studio that nobody in IT knows exist. Somewhere in the environment, three agents approved eighteen months ago are still running under the credentials of a person who left the organization in Q3.

Ask the CISO how many AI agents are operating in the environment. The number they give you is the number the governance team approved. The actual count is higher. In most enterprises in 2025 and 2026, significantly higher.

Reco's 2025 State of Shadow AI findings reported that 71 percent of office workers used AI tools without IT approval, and nearly 20 percent of organizations had already experienced data breaches or leaks attributable to unauthorized AI use. Cyberhaven's 2026 report found that 39.7 percent of all AI data movements involve sensitive data. IBM's 2025 Cost of a Data Breach report found that high levels of shadow AI added approximately $670,000 to the average breach cost.

The gap between the approved list and the actual count is Agent Sprawl. The gap has three tiers that require three different governance responses.

INSIDE THE ORGANIZATION

The governance question

How many AI agents are currently operating in your environment? Not the number that was formally approved, but the number actually running. If you cannot produce both figures and explain the gap between them, Agent Sprawl is active in your organization and its scale is unknown.

THE CONCEPT

Three tiers. Three different governance responses.

Agent Sprawl is not one problem. It is three distinct proliferation patterns operating simultaneously, each with a different cause, a different risk profile, and a different governance response. An organization that solves Tier 1 has not solved Tier 2. An organization that solves both has not solved Tier 3.

TIER 1

Employee shadow AI

Individual employees use personal accounts, personal devices, or browser-based access to AI tools for work tasks without IT approval, organizational visibility, or any assessment of what organizational data is being processed. This is the most visible tier and the one most organizations have begun to address with policy and monitoring. The governance response is policy and technical enforcement: acceptable use policies that specifically address AI, DLP controls extended to browser-based AI usage and clipboard flows, and CASB visibility into which AI tools are accessing organizational data from corporate endpoints.

Evidence

Reco reported that 71 percent of office workers used AI tools without IT approval. Its 2025 report also identified long persistence windows for unsanctioned tools. Public reporting on Samsung engineers pasting proprietary source code into ChatGPT remains a canonical early shadow AI example.

TIER 2

Organizational procurement without central visibility

Business units independently adopt AI tools and vendors through department-level purchases, pilot programs that became permanent, or vendor integrations that bypassed central IT procurement. The agents exist in the environment with legitimate business purposes. The problem is that no central function has visibility into the full inventory, assessed the combined risk surface, or assigned governance accountability. The governance response is inventory and procurement governance: mandatory AI intake for all deployments, cross-functional discovery for existing deployments, and a central registry that reflects the actual count rather than the approved list.

Evidence

Reco reported that companies with 11 to 50 employees averaged 269 shadow AI tools per 1,000 employees. Larger organizations still showed high unsanctioned AI density. These are often motivated employees solving real problems with tools their organizations made reachable.

TIER 3

Authorized agents with over-permissive operational scope

Agents were formally approved, correctly configured, and deployed with legitimate business authorization, but their operational permissions were never bounded to what their documented purpose actually requires. The organization knows these agents exist. The governance failure is in what they are permitted to do once deployed. This is the tier producing the largest incidents, and it is the least addressed. The governance response is operational constraint architecture: defining what the agent may not do regardless of technical capability, what system changes require human approval before execution, and what conditions trigger mandatory human review before deployment.
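The constraint architecture described above can be expressed as a machine-checkable record rather than prose in a policy document. The sketch below is illustrative only: the field names, actions, and agent ID are hypothetical, and the source does not prescribe any particular schema.

```python
# Hedged sketch of an operational-scope record for an authorized agent:
# what it may never do, what requires human approval before execution,
# and a decision helper. All names and actions here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class OperationalScope:
    agent_id: str
    prohibited: set[str] = field(default_factory=set)      # never allowed, regardless of capability
    needs_approval: set[str] = field(default_factory=set)  # human sign-off before execution

    def decide(self, action: str) -> str:
        """Map a requested action to deny / hold_for_approval / allow."""
        if action in self.prohibited:
            return "deny"
        if action in self.needs_approval:
            return "hold_for_approval"
        return "allow"

scope = OperationalScope(
    agent_id="ops-summarizer",
    prohibited={"delete_database", "send_external_email"},
    needs_approval={"modify_production_config"},
)
print(scope.decide("delete_database"))           # deny
print(scope.decide("modify_production_config"))  # hold_for_approval
```

The point of the record is that the prohibitions are explicit and reviewable: a Consequence Owner can read the scope without reading the agent's technical configuration.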

Evidence

Oso's Agents Gone Rogue register tracks the Meta internal agent Sev-1 exposure in March 2026 and the Replit coding assistant production database deletion. Both cases show agents or agent outputs operating with insufficient operational constraints.

THE COMPOUNDING PROBLEM

The inventory problem compounds with every deployment cycle

Most enterprise AI governance programs in 2025 and 2026 are focused on Tier 1. Shadow AI policies, DLP extensions to AI tools, and CASB dashboards showing unsanctioned usage are the right interventions for Tier 1. They are necessary. They are not sufficient.

An organization that has solved Tier 1 has addressed the employee behavior problem. It has not addressed the organizational procurement problem. Business units continue adopting AI vendors through channels that bypass the new controls because the intake process applies to IT-procured tools and the business unit purchased this one through a SaaS subscription on a corporate card. The Tier 2 population grows while the Tier 1 population is being managed.

An organization that has solved Tier 1 and Tier 2 has a complete inventory and an intake process. It has not addressed what happens to agents after they are approved. Tier 3 sprawl occurs inside the governed population. The agents are in the registry. The authorization records exist. But the operational permissions were never bounded, the change-actor question was never asked, and nobody defined what the agent is not allowed to do regardless of what it is technically capable of doing.

The Governance Readiness Matrix measures where an organization sits on agent count versus authorization coverage. Most organizations that have made progress on Tier 1 discover that their Tier 2 and Tier 3 populations have been accumulating Governance Debt at the same pace they were reducing it in Tier 1. The ratio is not improving. It has shifted.
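A minimal sketch of the matrix placement follows, assuming two axes (agent count and authorization coverage) split by thresholds. The cut-off values of 50 agents and 80 percent coverage are assumptions for illustration; the source does not specify thresholds.

```python
# Illustrative Governance Readiness Matrix placement: agent count
# versus authorization coverage. Thresholds are assumed, not sourced.

def readiness_quadrant(agent_count: int, authorized_count: int,
                       count_threshold: int = 50,
                       coverage_threshold: float = 0.8) -> str:
    """Return the quadrant label for an organization's agent population."""
    coverage = authorized_count / agent_count if agent_count else 1.0
    count_axis = "high count" if agent_count >= count_threshold else "low count"
    coverage_axis = ("high coverage" if coverage >= coverage_threshold
                     else "low coverage")
    return f"{count_axis}, {coverage_axis}"

# A large agent population with thin authorization coverage:
print(readiness_quadrant(agent_count=120, authorized_count=30))
# high count, low coverage
```

An organization that has only worked Tier 1 typically lands in the high count, low coverage quadrant: deployment velocity has outpaced authorization coverage.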

MICROSOFT STACK

Where Agent Sprawl concentrates in Microsoft 365 environments

Organizations standardized on Microsoft 365 face a specific Agent Sprawl pattern that differs from the general enterprise picture. Microsoft 365 is the productivity layer for most of the corporate data these organizations govern: email, documents, SharePoint content, and Teams conversations. When employees bring external AI tools into their workflows, they are almost always bringing them into contact with Microsoft 365 data.

Copilot Studio's no-code agent builder accelerates Tier 2 and Tier 3 simultaneously. A motivated business analyst can deploy a Copilot Studio agent against SharePoint, Teams, and organizational data in an afternoon without writing code and without involving the security team. The agent appears in the Microsoft 365 admin center inventory. It may not appear in the IT governance registry. The permissions it inherited from the creator's account may significantly exceed what its documented purpose requires.

Microsoft's 2026 security guidance for Copilot Studio agents names prompt injection, unsafe orchestration, email-based data exfiltration paths, and misconfigured agent workflows as risks organizations must detect and prevent.

The Tenant Agent Reconciliation Framework is the operational tool for surfacing the actual Microsoft 365 agent population across the M365 admin center inventory, Power Platform, and Azure, and for measuring the gap between those three counts and what the IT governance registry contains.
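The reconciliation step reduces to set arithmetic once each inventory is exported as a list of agent identifiers. The sketch below assumes such exports exist; the inventory names and agent IDs are placeholders, not real export formats.

```python
# Hypothetical reconciliation sketch: compare agent IDs exported from
# each platform inventory against the IT governance registry.

def reconcile(inventories: dict[str, set[str]], registry: set[str]) -> dict:
    """Return the actual agent population and the governance gap."""
    actual = set().union(*inventories.values())  # union across all platforms
    return {
        "actual_count": len(actual),
        "registry_count": len(registry),
        "ungoverned": sorted(actual - registry),     # running, no authorization record
        "stale_records": sorted(registry - actual),  # approved, no longer found
    }

inventories = {
    "m365_admin": {"agent-a", "agent-b", "agent-c"},
    "power_platform": {"agent-c", "agent-d"},
    "azure": {"agent-e"},
}
registry = {"agent-a", "agent-b", "agent-f"}

gap = reconcile(inventories, registry)
print(gap["ungoverned"])  # agents operating with no governance record
```

The "ungoverned" list is the Tier 2 and Tier 3 discovery backlog; the "stale_records" list surfaces decommissioned or orphaned approvals, such as agents still running under a departed employee's credentials.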

THE FRAMEWORKS

Four frameworks for addressing Agent Sprawl at scale

Agent Sprawl is not addressed by a single governance action. The three tiers require four distinct operational frameworks working in sequence: surface the actual population, measure the coverage gap, close the gap through a structured process, and prevent new ungoverned agents from accumulating through operational controls.

RELATED CONCEPTS

Where Agent Sprawl sits in the accountability structure

Agent Sprawl is the scale problem. The other concepts describe what happens at the level of individual agents when sprawl is not contained.

Governance Debt is what Agent Sprawl produces at organizational scale. Every agent deployed without authorization, a named owner, and a compliance review is a unit of Governance Debt. Agent Sprawl is the mechanism by which Governance Debt compounds faster than organizations can address it.

The Accountability Assumption operates inside every ungoverned agent in the sprawl population. Each agent without a named Consequence Owner is an agent where the assumption is in place, and the assumption compounds with every agent added to the ungoverned inventory.

Intent Architecture is the organizational design layer that prevents Tier 2 and Tier 3 sprawl at the source. An intake process enforced consistently, including for deployments described as urgent, limited, or temporary, stops ungoverned agents from entering the population. The intake process is what converts the Authorization Coverage Lifecycle from Accumulation to Resolution.

The Intent Gap is total for every agent in the sprawl population that has no documented intent. An agent with no authorization record has no documented purpose, no explicit prohibitions, and no named accountable owner watching the distance between intended and actual behavior.

WHAT GOOD LOOKS LIKE

When Agent Sprawl is under control

The organization's agent registry reflects the actual count of AI agents operating in the environment, not just the count of formally approved ones. The Tenant Reconciliation Gap is declining quarter over quarter as the intake process matures. The Governance Readiness Matrix places the organization in the high count, high coverage quadrant because authorization coverage has kept pace with deployment velocity.

For Tier 1: employees have clear guidance on which AI tools are permitted for work use, what organizational data may be processed through which tools, and where the boundaries are. DLP and CASB controls extend to browser-based AI usage. The shadow AI population is monitored and declining.

For Tier 2: every new AI deployment, regardless of whether it originates from IT, a business unit, or a citizen developer using Copilot Studio, goes through the intake process before it enters production operation. The intake process is enforced consistently, including for deployments described as urgent. The business unit that wants to bypass the process encounters the same gate as IT.

For Tier 3: every authorized agent has a documented operational scope specifying what it may not do regardless of technical capability, what system changes require human approval before execution, and what conditions trigger mandatory human review. The Consequence Owner for each agent can describe these constraints without reading the technical configuration.