
GOVERNANCE & SECURITY

Your AI workloads are running. Do you know what they can access, what they cost, and who owns them?

Most organizations started deploying AI the same way they started with cloud in 2015: fast, decentralized, each team making its own infrastructure decisions. Cloud took a decade to reach governance maturity. AI is moving faster, with more data exposure, and the infrastructure debt arrives sooner. Microsoft's April 2026 Cloud Adoption Framework guidance is explicit: AI workloads should be governed as first-class cloud workloads using existing Azure landing zones and Azure Policy, not treated as exceptions to normal platform engineering standards. The remedy is patterns. A pattern is a pre-configured environment a new team provisions instead of designing from scratch, with network isolation, identity, cost tagging, and a security baseline inherited before the first line of code runs. The platform has the controls. The question is whether the organization has operationalized them.


THE PROBLEM

What actually happens when AI workloads run ahead of infrastructure governance

These are not hypothetical risks. They are the patterns that appear consistently when organizations deploy AI workloads without established infrastructure.

What breaks first

Cost control, security consistency, and data accountability all drift before leadership realizes the foundation is missing.

These scenarios are the recurring shape of AI infrastructure debt once teams start provisioning independently.


Every team is solving the same problem from scratch

The data engineering team built their own Azure AI Foundry project with their own network configuration. The product team built a different one a month later and made different security decisions. Nobody compared notes. Three months in, you have four separate AI environments, none of them consistent, and a new team asking how to set up theirs. Microsoft's April 2026 Cloud Adoption Framework guidance describes exactly this failure mode and recommends policy initiatives and landing zone templates as the remedy. The answer to "how do we set up AI infrastructure" should not depend on which engineer a new team happens to ask. Without shared patterns, it always does.


THE PATTERNS

What enterprise-grade AI infrastructure governance looks like in practice

A pattern is a template that a new team uses on day one to get a working, secure, governed AI environment without making the decisions from scratch. Here is what each pattern covers.

Landing zones

Microsoft's April 2026 Cloud Adoption Framework guidance states that organizations should use existing Azure landing zones for AI workloads rather than building separate AI-specific environments. AI-specific controls - network isolation, model restrictions, quota enforcement, agent identity policies - are added as overlays through Azure Policy initiatives and governance controls, not by rebuilding the platform. When a new team provisions from a landing zone template, they inherit network isolation, identity configuration, security baseline, and cost tagging without recreating those decisions from scratch. The team is operational faster and the security posture does not depend on which engineer they happened to ask.

Microsoft Foundry project templates with managed network isolation

Private endpoints configured by default, no public compute exposure

Entra-based identity and access configured at environment creation

Resource group structure that maps to cost attribution from day one
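The overlay idea can be made concrete. Below is a minimal sketch of one AI-specific Azure Policy overlay, expressed as the JSON policy rule a landing zone initiative might carry. The display name and the exact alias shown are illustrative choices, not Microsoft's published initiative; a real deployment would assign a curated set of such policies at the landing zone scope.

```python
import json

# Sketch of one AI-specific policy overlay assigned on top of an existing
# landing zone, rather than baked into a separate AI platform. The resource
# type and alias below target Cognitive Services / Azure OpenAI accounts;
# treat this as an example of the shape, not a definitive policy set.
deny_public_ai_endpoints = {
    "properties": {
        "displayName": "AI accounts must disable public network access",
        "mode": "Indexed",
        "policyRule": {
            "if": {
                "allOf": [
                    {"field": "type",
                     "equals": "Microsoft.CognitiveServices/accounts"},
                    {"field": "Microsoft.CognitiveServices/accounts/publicNetworkAccess",
                     "notEquals": "Disabled"},
                ]
            },
            "then": {"effect": "deny"},
        },
    }
}

print(json.dumps(deny_public_ai_endpoints, indent=2))
```

Because the effect is "deny", a team provisioning from the template cannot opt out of network isolation by accident; the control is inherited, not re-decided.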


FOUNDATION WORK

How organizations close the gap between platform capability and platform governance

Microsoft's AI control plane in April 2026 is more capable than it was twelve months ago. Azure AI Foundry Agent Service, Evaluations, and Continuous Monitoring reached general availability in March 2026. Entra Agent ID now auto-assigns identities to agents created through Foundry, Agent 365, and Copilot Studio. Defender for Cloud added AI workload monitoring. The Cloud Adoption Framework published updated AI governance guidance. The platform investments are real. The gap is between what the platform makes possible and what organizations have actually configured and enforced. Closing that gap follows a consistent sequence.


Step 1

Assess what already exists

Before designing anything, the current AI infrastructure needs to be inventoried. What Azure environments exist and how they were configured. What network isolation decisions were made and by whom. What data connections are undocumented. What costs are unattributed. What SDK versions workloads depend on - Azure Machine Learning SDK v1 reaches end of support June 30, 2026, meaning any v1-dependent workload is already in a narrowing remediation window. Which agents are running, under what identities, and whether they appear in the Entra Agent Registry. The output is a concrete map of what exists, what is outside a defensible governance boundary, and where the gaps are.

Inventory of all AI deployments across Azure subscriptions

Configuration review against Microsoft's AI security baseline

Data connection audit for documentation and scope

SDK and dependency version assessment for upcoming EOL risk

Cost attribution gap analysis
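The assessment checks above can be sketched as a simple pass over an inventory. The record shape and field names here are hypothetical; in practice the inventory would come from Azure Resource Graph or subscription exports, and the checks would cover the full baseline rather than three examples.

```python
from datetime import date

# Hypothetical inventory records standing in for a real Azure Resource Graph
# export. Field names are illustrative assumptions.
inventory = [
    {"name": "foundry-data-eng", "public_network": True,
     "cost_tags": {}, "sdk": "azureml-v1"},
    {"name": "foundry-product", "public_network": False,
     "cost_tags": {"owner": "product"}, "sdk": "azureml-v2"},
]

AML_V1_EOL = date(2026, 6, 30)  # Azure ML SDK v1 end of support

def assess(resource, today):
    """Flag the governance gaps Step 1 is looking for in one resource."""
    gaps = []
    if resource["public_network"]:
        gaps.append("public network exposure")
    if "owner" not in resource["cost_tags"]:
        gaps.append("no cost attribution tag")
    if resource["sdk"] == "azureml-v1":
        gaps.append(f"SDK v1 dependency, {(AML_V1_EOL - today).days} days to EOL")
    return gaps

for r in inventory:
    print(r["name"], "->", assess(r, date(2026, 4, 15)) or "no gaps found")
```

The output of this pass is exactly the "concrete map" the step describes: which resources sit outside the defensible boundary, and why.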


Step 2

Design patterns that match the environment

An organization is not a blank sheet. Azure landing zones may already be in place, or they may not. Entra policies may already exist, or they may not. Teams may already be building on Microsoft Foundry, or they may just be starting. The patterns need to work with what exists: adopting AI governance does not require rebuilding the cloud environment. The AI-specific layer sits on top of what already works.

Landing zone template design for the subscription and network topology

Security baseline aligned to Microsoft's AI security benchmark

Data pattern design for the specific enterprise data sources

Cost attribution model that fits existing FinOps practices
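One design artifact from this step, the cost attribution model, can be expressed as a required-tag schema with a validator. The tag names and allowed values below are assumptions standing in for the organization's own FinOps conventions.

```python
# Sketch of a cost attribution model as a required-tag schema. None means
# "any non-empty value is acceptable"; a set restricts the allowed values.
# Tag names and values are illustrative, not a prescribed standard.
REQUIRED_TAGS = {
    "cost-center": None,
    "workload-tier": {"pilot", "production"},
    "data-classification": {"public", "internal", "confidential"},
}

def validate_tags(tags):
    """Return a list of tag schema violations for one resource."""
    violations = []
    for key, allowed in REQUIRED_TAGS.items():
        value = tags.get(key)
        if not value:
            violations.append(f"missing tag: {key}")
        elif allowed is not None and value not in allowed:
            violations.append(f"invalid value for {key}: {value}")
    return violations

print(validate_tags({"cost-center": "ai-platform", "workload-tier": "demo"}))
```

Encoding the model this way means the same schema can later drive an Azure Policy tag enforcement assignment, so design and enforcement stay in sync.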


Step 3

Deploy patterns and start using them

Patterns are infrastructure as code. When they are deployed, the result is a set of Bicep or Terraform templates that any team in the organization can use to start a new AI project correctly. Documentation explains how to use them, the first team is trained through the process, and support remains available for the first three deployments.

Landing zone templates deployed using existing Azure landing zone architecture with AI-specific policy overlays - not a separate AI environment

Security controls active, cost tagging enforced, network isolation in place

Runbook for new project provisioning that any team can follow

First team walked through provisioning from the template
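A useful companion to the runbook is a post-provisioning smoke check the new team runs after deploying from the template. The deployment-output shape below is hypothetical; a real check would read Bicep or Terraform outputs, or query Azure directly.

```python
# Sketch of a post-provisioning smoke check against the template's baseline
# guarantees. Keys and expected values are illustrative assumptions.
EXPECTED = {
    "private_endpoint": True,
    "public_network_access": "Disabled",
    "policy_initiative_assigned": True,
}

def smoke_check(deployment_outputs):
    """Map each failed check to its (expected, actual) pair."""
    return {
        key: (expected, deployment_outputs.get(key))
        for key, expected in EXPECTED.items()
        if deployment_outputs.get(key) != expected
    }

outputs = {"private_endpoint": True,
           "public_network_access": "Disabled",
           "policy_initiative_assigned": True}
print(smoke_check(outputs))  # empty when the baseline is fully inherited
```

An empty result is the signal that the team inherited the baseline intact; anything else is a deviation to investigate before the first workload ships.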


Step 4

Let teams run this independently

The goal is not dependency. It is a foundation engineers own and maintain. Every decision is documented and the reason for it is recorded. The templates, runbooks, and pattern library remain in the environment. When Microsoft releases a new Foundry capability or changes a best practice, the team has enough context to update the patterns themselves.

Complete infrastructure as code in the repository, with comments

Decision log explaining every architectural choice

Runbook for ongoing pattern maintenance and updates

30-day availability for questions after handoff

Foundation Blueprint

A governed AI environment should feel repeatable before it feels fast.

Starting Condition

Teams arrive with AI demand, not with shared infrastructure decisions.

The foundation exists so identity, network isolation, and cost controls are inherited before a project starts moving quickly.

Controls

Identity, network, cost tagging

Applied once in the pattern instead of re-decided by each team.

Outcome

Faster setup with fewer surprises

Projects inherit the baseline and start from a controlled posture.

Operating Model

Reuse beats rework

The pattern becomes the default path for each new AI workload.

Pattern Sequence

1. Assess the current estate
2. Design reusable AI patterns
3. Deploy and operationalize templates
4. Hand off ownership with documentation

Baseline: one shared starting point for new AI projects.

Drift: down. Less variation in how identity and security get configured.

Ownership: clear. Teams know what they inherit and what they maintain.

Why it works

Teams start from a known-good configuration instead of recreating the same control decisions from scratch.

The result is faster project setup, clearer accountability, and fewer surprises when a pilot starts behaving like production.

WHAT YOU GET

What a working foundation looks like

The deliverables are designed to leave the team with working assets, not just recommendations. Each output is meant to be reused by the next AI project without redoing the design work.


Landing zone templates

Deployable Bicep or Terraform templates for AI workloads. Managed network isolation configured. Entra identity baseline in place. Cost tagging schema enforced. A new team provisions in hours, not weeks, and inherits security defaults without making configuration decisions.


RIGHT FIT

When this work is worth doing

Best Fit

When this foundation work becomes urgent

Teams are building AI workloads in Azure and each one looks different

You have had at least one cloud cost surprise tied to an AI project

Security or compliance has flagged AI infrastructure as a gap

You are planning to deploy more AI workloads and want a repeatable starting point

You have been in a situation where nobody could answer 'what data does this AI touch'

Can Wait

When this work can wait

Your AI footprint is a single Copilot deployment with no custom workloads

You already have a platform engineering team actively maintaining AI infrastructure patterns

You are purely using SaaS AI tools with no Azure compute or custom deployments

You have fewer than three AI projects planned in the next twelve months

COMMON QUESTIONS

What people ask before starting

The questions below are the ones that usually come up once teams realize this is infrastructure design work, not just another AI pilot.

Does this only apply if we use one specific Azure AI service?

No. The landing zone pattern covers the full AI infrastructure stack on Azure: Azure AI Foundry, Azure OpenAI, Azure Machine Learning, and the networking, identity, and storage services they depend on. If you are building AI workloads on Azure, this applies regardless of which specific AI service you are using.

FURTHER READING

AI infrastructure decisions made informally in 2024 and 2025 are becoming the debt organizations pay back in 2026.

The Governance Gap covers enterprise AI governance on the Microsoft stack: infrastructure patterns, Agent 365 governance architecture, Entra Agent ID as the identity foundation for agents, and the specific controls that prevent AI sprawl from becoming a security and cost problem. Agent 365 reaches general availability May 1, 2026. The control plane for AI governance on Azure is more integrated than it has ever been; the gap is not in the platform but in what organizations have configured and enforced. New editions publish every Tuesday at 7am.

Built on verified Microsoft documentation

Written from 12 years inside enterprise regulatory and biomedical research environments

Updated as the Microsoft platform evolves

Why read next

Continue from the foundation into the broader governance model.

The next step is not more theory. It is understanding how infrastructure patterns, ownership, and control design fit together as AI use expands across the enterprise.

Coverage

Infrastructure patterns, Agent 365 governance architecture, and the controls that prevent AI sprawl from turning into security and cost debt.