
ORIGINAL FRAMEWORKS

The Governance Readiness Matrix

Two axes. Agent count versus authorization coverage. One number tells you where you are. Most organizations cannot produce either number.

The reconciliation problem typically surfaces at the worst possible moment, during examination prep, when the team that deployed the original agents has turned over and nobody owns the registry. The matrix exists because most organizations discover their agent count by asking the wrong people.

v1.0 · April 2026 · Sougata Roy, sougataroy.com

Free to read and cite with attribution to Sougata Roy and sougataroy.com. Do not republish, rebrand, or claim authorship of any framework, term, or model as your own.

[Figure: Governance Readiness Matrix, v1.0]

Axes: Agent Count (horizontal, low to high) and Authorization Coverage (vertical, low to high).

  • Pre-Authorization Discipline - low count / high coverage
  • Governed at Scale - high count / high coverage
  • Pilot Exposure - low count / low coverage
  • Agent Sprawl - high count / low coverage (most common)

Where most organizations land

High agent count, low authorization coverage. The ratio tells you where you are. Most organizations discover they cannot produce either number on demand.

AGENT PATHWAY DECISION TREE

Approved vs Unauthorized Agent Pathway Decision Tree

A practical decision path for separating governed agent activity from accumulated governance debt. An agent action is treated as governed only if its record survives all four gates.

  • Gate 01 - Authorization record exists? NO -> Unauthorized Pathway: governance debt accumulated. No evidentiary artifact; examination exposure.
  • Gate 02 - Human owner currently active? NO -> Ownership Gap: review trigger required. The agent is compliant; accountability is not.
  • Gate 03 - Use case boundary documented? NO -> Intent Gap: scope unverifiable. The boundary is missing from the record.
  • Gate 04 - Review trigger set and current? NO -> Governance Debt: accumulation stage. Review evidence is stale or missing.
  • All four gates YES -> Authorized Pathway: examination ready. Authorization, ownership, scope, and review all documented.
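The four-gate walk can be sketched as a simple classifier. This is an illustrative Python sketch, not part of the framework itself; the record fields (authorization_record, owner_active, use_case_documented, review_date) are assumed names, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record shape; field names are illustrative only.
@dataclass
class AgentRecord:
    authorization_record: bool   # Gate 01: documented authorization exists
    owner_active: bool           # Gate 02: named human owner currently active
    use_case_documented: bool    # Gate 03: use case boundary documented
    review_date: Optional[date]  # Gate 04: review trigger set and current

def classify(record: AgentRecord, today: date) -> str:
    """Walk the four gates in order; return the first gap hit."""
    if not record.authorization_record:
        return "UNAUTHORIZED PATHWAY"  # governance debt accumulated
    if not record.owner_active:
        return "OWNERSHIP GAP"         # review trigger required
    if not record.use_case_documented:
        return "INTENT GAP"            # scope unverifiable
    if record.review_date is None or record.review_date < today:
        return "GOVERNANCE DEBT"       # review evidence stale or missing
    return "AUTHORIZED PATHWAY"        # examination ready
```

The ordering matters: an agent with no authorization record never reaches the ownership question, which mirrors the diagram's first-gap-wins structure.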

The problem

Why this framework exists

The governance gap in enterprise AI is not a technology problem. It is a velocity problem. Organizations are deploying AI systems faster than they are building the organizational structures to govern them. The result is a gap between what is deployed and what is governed that grows with every new deployment and compresses with every remediation effort.

Inside the organization

The governance question

For each AI system currently operating in your environment, can your organization produce a documented authorization record, a named accountable owner who knows they own it, and evidence of a compliance review conducted before deployment?

The framework

The two axes

The matrix has two axes. Understanding what each one measures is the prerequisite for placing your organization accurately on it.

Axis

Agent Count

The horizontal axis is Agent Count. This is the actual count of AI systems operating in the environment, including systems deployed without formal approval. The approved count is what IT thinks is deployed. The actual count is what a cross-functional inquiry produces. These are not the same number.

Axis

Authorization Coverage

The vertical axis is Authorization Coverage. This is the percentage of deployed AI systems with all four governance artifacts verifiably in place. This is not a tier score or a policy commitment. It is a ratio. It can be calculated from two numbers. If you cannot calculate it today, that inability is the finding.

Governance artifacts

Required for an agent to count as governed

  • A documented authorization record
  • A named accountable owner who knows they own it
  • A defined review date that has not passed
  • A record of compliance review conducted before deployment
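Under the definition above, coverage is the share of deployed systems carrying all four artifacts. A minimal sketch; the dictionary keys are hypothetical placeholders for the four artifacts, not a prescribed schema.

```python
def authorization_coverage(records: list[dict]) -> float:
    """Percentage of deployed systems with all four governance artifacts in place.

    Each record is one deployed system; the four keys below are
    illustrative stand-ins for the four required artifacts.
    """
    required = (
        "authorization_record",   # documented authorization
        "named_owner",            # accountable owner who knows they own it
        "review_date_current",    # defined review date that has not passed
        "pre_deployment_review",  # compliance review before deployment
    )
    if not records:
        return 0.0
    governed = sum(1 for r in records if all(r.get(k) for k in required))
    return 100.0 * governed / len(records)
```

A system missing any one artifact counts as ungoverned: the ratio deliberately has no partial credit, matching the all-four-gates rule.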

The matrix

The four quadrants

The Governance Readiness Matrix gives organizations a precise, calculable way to understand where they are. Not as an abstract self-assessment. Not as a maturity model that requires expert scoring. As a ratio, calculated from two numbers your organization either has or cannot produce - and the inability to produce them is itself a finding.


LOW COUNT / HIGH COVERAGE

Pre-Authorization Discipline

Low deployment velocity and high governance maturity. Few deployments, but each one is fully governed. The organization has built governance discipline before scaling. This is the right starting position for an enterprise that has not yet deployed AI broadly. The risk here is specific and worth naming: governance processes designed for low volume often do not survive the transition to scale. Organizations in this quadrant should redesign their governance process for the velocity they expect, not the velocity they currently have.

Good starting position, specific risk.

HIGH COUNT / HIGH COVERAGE

Governed at Scale

High deployment velocity and high governance maturity. Every new deployment goes through an established governance process. The organization can produce authorization records for its AI systems on demand, as a routine operational capability, not in response to a triggering event. The shadow agent population is low and declining. The intake process is enforced consistently, including for urgent deployments. This is the destination. It is currently occupied by a small minority of enterprises. The organizations that are there did not arrive by accident. They built the intake process before they needed it.

Destination state.

LOW COUNT / LOW COVERAGE

Pilot Exposure

Low deployment velocity and low governance maturity. Few deployments, and governance is not yet in place. The organization is in an AI pilot phase. This quadrant is only genuinely low-risk if two conditions hold simultaneously: the pilots remain genuinely limited in scope and data access, and governance infrastructure is being built before scale begins. Most organizations in this quadrant believe they have more time than they do. The transition from pilot to production happens faster than governance programs develop.

Only low-risk if two conditions hold.

HIGH COUNT / LOW COVERAGE

Most common

Agent Sprawl

High deployment velocity and low governance maturity. This is where most enterprises are in 2026. Deployment has outpaced governance. Many AI systems are operating without authorization records, without named accountable owners, or without compliance review. The organization knows it has AI systems running. It does not know the complete count, and it cannot produce governance artifacts for a significant portion of them on demand. The signal for this quadrant is the gap between what leadership thinks is deployed and what a discovery exercise reveals. The shadow agent population grows with every passing quarter in which no intake process exists.

MOST COMMON
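Quadrant placement from the two numbers can be sketched in a few lines. The 80 percent coverage cutoff echoes the "what good looks like" target later in this document; the agent-count cutoff of 20 is an arbitrary illustration, since the framework does not fix a numeric boundary for "high count".

```python
def quadrant(agent_count: int, coverage_pct: float,
             count_threshold: int = 20,        # assumed cutoff, not from the framework
             coverage_threshold: float = 80.0  # echoes the >80% "good" target
             ) -> str:
    """Place an organization on the Governance Readiness Matrix."""
    high_count = agent_count >= count_threshold
    high_coverage = coverage_pct >= coverage_threshold
    if high_count and high_coverage:
        return "Governed at Scale"
    if high_count:
        return "Agent Sprawl"            # the most common quadrant
    if high_coverage:
        return "Pre-Authorization Discipline"
    return "Pilot Exposure"
```

With 40 deployments and 10 percent coverage, `quadrant(40, 10.0)` lands in Agent Sprawl, matching the worked example in the sprint section below.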

The path forward

How to apply it

Step 1 is establishing the velocity count. Inventory every AI system currently operating in your environment. The count must include systems deployed through official channels and systems deployed by teams without formal approval. Ask IT, compliance, procurement, and individual business units separately, then compare their answers. The gap between what any single source says and what a cross-functional inquiry produces is the baseline measure of how much shadow AI exists in your environment. The count that matters is the actual count, not the approved count.
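The cross-source comparison in Step 1 amounts to a set union plus a per-source difference. A minimal sketch, with source names and system identifiers invented for illustration:

```python
def inventory_gap(sources: dict[str, set[str]]) -> tuple[set[str], dict[str, set[str]]]:
    """Union the per-source inventories, then show what each single source misses.

    `sources` maps a reporting function (IT, compliance, procurement, a
    business unit) to the set of AI system identifiers it reports.
    Returns the cross-functional actual count and, per source, the systems
    that source alone would not have surfaced.
    """
    actual = set().union(*sources.values()) if sources else set()
    missed = {name: actual - systems for name, systems in sources.items()}
    return actual, missed
```

The size of any `missed[name]` set is that source's shadow-AI blind spot; the size of `actual` is the count that matters.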

Board governance review

The Board Governance Review: Four Decisions That Move Quadrants

This framework guides the board through a four-step evaluation to move from "Urgent" or "Prepare" postures toward a "Sustain" posture, based on deployment velocity and governance maturity.

D1

Which quadrant are we currently in?

Current posture is determined by agents in production versus existing governance documentation.

Accurate answer -> correct intervention.

Estimated answer -> misallocated resources.

D2

What is our trajectory?

Maturity is defined by whether governance work keeps pace with deployment velocity.

Governance growing with deployment -> on track.

Deployment outpacing governance -> debt accumulating.

D3

What resources move us to the target quadrant?

Movement requires specific investment in registry infrastructure, documentation programs, and headcount.

Cost of governance vs cost of governance debt - compare before deciding.

D4

What is the regulatory exposure cost of our current posture?

In regulated environments, governance posture has a quantifiable liability value.

Unquantified exposure is the most expensive governance debt.

Moving quadrants is a resource decision, not a values decision. Leadership that understands which quadrant it is in can make the investment case. Leadership that does not is making the decision by default.

Why it lasts

The ratio is not a target. It is a current state. The work of governance is maintaining it at a level that reflects the organization's regulatory obligations and risk tolerance, and improving it consistently over time.

Who it is for

What good looks like

Your governance coverage rate is above 80 percent and is being actively maintained. Every new deployment goes through the intake process before it goes live. The organization can produce authorization records for deployed AI systems on demand, not in response to an incident, but as a routine operational capability. When the velocity count changes, the coverage rate is recalculated within a defined time window and the result is reported to the person accountable for the organization's AI governance posture.

Quick reference

Download the Quick Check Card

A one-page reference card for calculating your governance position and identifying your quadrant in a single working session.

QUICK CHECK

Governance Readiness Matrix: Where Do You Stand?

Calculate your Governance Debt Ratio and identify your quadrant from two inputs.

Download PDF

Executive FAQ

Questions leaders ask before deployment

These are the questions that separate a maturity policy from an operating governance system. If the answer is not visible in the matrix, the coverage ratio is still an assumption.

60-minute operating sprint

Apply this framework in one working session

Use this as a live governance exercise. Leave the session with named evidence, a visible gap, and a next owner rather than another discussion note.

Working session board

One pass through the framework. One evidence trail.

4

Steps

60

Minutes

1

Owner

Live

Decision

01

15 minutes

Count deployments

How many AI agents or AI-enabled tools does your organization currently operate? Include Copilot, Copilot Studio agents, any automation using AI APIs, and any third-party AI tools accessed by employees. Write the number down. If you cannot produce a number, that is itself a governance maturity finding - you do not know your position on the agent count axis.

Output

Written evidence ready for the next governance decision.

02

15 minutes

Count governance artifacts

For each deployment, how many have: a documented authorization record, a named human owner who knows they own it, a defined review date that has not passed, and a record of compliance review before deployment? Count how many have all four. This is your governance maturity score.

Output

Written evidence ready for the next governance decision.

03

15 minutes

Calculate the ratio

If you have 40 deployments and 4 have full governance artifacts, your ratio is 10 percent. This places you on the matrix. Most organizations are in the Agent Sprawl quadrant - high agent count, low authorization coverage - before they know it.

Output

Written evidence ready for the next governance decision.

04

15 minutes

Identify your highest-risk gap

Among the deployments without governance artifacts, which one has access to the most sensitive data? Which one would be most difficult to explain to an examiner without documentation? That is where remediation starts.

Output

Written evidence ready for the next governance decision.

Referenced in

This framework is analyzed in the white paper

"Who Owns the Agent?" applies this framework to real-world deployment scenarios and maps it to named governance incidents from 2024 to 2026.

White paper

Who Owns the Agent?

The Intent Architecture Stack white paper, ten sections, complete diagnostic, named incident analysis, Intent Document template.

Research brief

Source: McKinsey and Company, "State of AI Trust in 2026: Shifting to the Agentic Era," March 25, 2026. Survey of approximately 500 organizations, December 2025 to January 2026.
