

After May 1, "We Didn't Know" Is No Longer a Defense

March 20, 2026

On enterprise AI accountability and the governance question every board will have to answer.

In January 2021, the Dutch government resigned over a childcare benefits scandal. For years, the tax authority had used a discriminatory algorithmic risk-scoring system to flag fraud, and it wrongly targeted about 26,000 families, ordering them to repay tens of thousands of euros in benefits. Many were driven into severe debt and, in some cases, eviction. Some families had children placed out of the home. A parliamentary inquiry titled its report "Unprecedented Injustice."

When families demanded answers, they asked a question that should have been simple. Who decided this was fraud?

Nobody could answer it. Not the civil servants who received the risk scores. Not the managers who approved the system. Not the ministers who oversaw the administration. The algorithm had made the decisions. The humans had deferred to it. The accountability chain had a gap where a human decision-maker should have been. A government fell into that gap.

That case is not about AI agents. It is not about Microsoft. It happened before Copilot Studio existed. But on May 1, 2026, every enterprise deploying agents on the Microsoft stack will inherit the same structural risk - at speed, at scale, and now with complete audit trails that will make the accountability question much harder to defer.

THE QUESTION THIS EDITION ANSWERS

Agent 365 launches in six weeks. It will give your organization deep visibility into AI agents in your Microsoft environment. Actions logged. Identities verified. Decisions made auditable.

What it will not give you is the answer to the question the Dutch families asked.

Who decided?

THE EVIDENCE

Vasu Jakkal's March 9, 2026 Microsoft Security Blog post is precise about what Agent 365 delivers. A unified control plane for agents. An Agent Registry. Behavior and performance observability. Microsoft Entra Agent ID - a unique verified identity for every managed agent, treated as an auditable entity alongside users and applications.

This is genuinely significant. The shadow agent problem finally has a control plane. Agents that were invisible become visible. Actions that were untracked become logged. The technical governance fabric Microsoft has built is sophisticated.

Here is what Jakkal's post does not say, and what the product cannot do.

It cannot name the human accountable for the decision to deploy a given agent. It cannot document who approved the thresholds at which the agent acts autonomously versus escalates to a person. It cannot show a regulator the override log - the record of moments when a human reviewed the agent's decision and chose to intervene or not. And it cannot answer the question a board will ask when an agent causes harm: who accepted the residual risk?

Microsoft's own Cloud Adoption Framework states this directly, in a document most enterprises have not read. It instructs organizations to establish responsible AI standards, empower a cross-functional governance team, embed responsible AI checkpoints into workflows, require formal sign-offs for high-risk agents, and define escalation paths and shutdown authorities. The document is unambiguous. These are organizational design tasks. Agent 365 provides the substrate. Building on it is your work.

The technical tools will be in place on May 1. The organizational accountability architecture is a choice.

Here is why that distinction matters more than it has ever mattered before.

THE STAKES

Before Agent 365, an enterprise facing an AI governance question had one answer available, however inadequate: we did not have visibility. We did not know what our agents were doing.

That answer expires on May 1.

After Agent 365, you will have deep audit capability across agent actions in your environment. Managed agents can be given a verified identity traceable through Entra. Actions will be auditable in eDiscovery. You will know.

Which means when something goes wrong - a customer harmed, a regulatory violation, a decision nobody intended - the regulator will not ask whether you had visibility. You did. The logs will prove it.

They will ask what you did with that visibility. Who owned the consequences. Who had the authority to stop the agent. Who reviewed the decision. Who was accountable.

The EU AI Act's Article 14 requires that the people overseeing high-risk AI systems be able to duly monitor how they operate, remain aware of automation bias, and, in any particular situation, decide to disregard, override, or reverse the system's output. GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, subject to narrow exceptions that require additional safeguards, including access to human intervention and the ability to contest the decision. SR 11-7, the model risk guidance from the Federal Reserve and the OCC, which supervisors expect banks to apply to models including AI systems used in credit and trading decisions, calls for strong model governance with accountable management, independent validation, and mechanisms to escalate and remediate identified weaknesses. None of these standards is satisfied by the mere existence of an audit log.

In high-stakes federally regulated domains like financial markets and health care, governance frameworks were built so that humans - not systems - carried accountability. Public rules and guidance around algorithmic trading, for example, emphasize clearly assigned responsibilities, "kill switch" controls, pre-deployment checks, and independent oversight, with governance structures designed before systems go live.

Most enterprises have built the technical layer. Almost none have built the accountability layer underneath it.

THE ARCHITECTURE

Agent 365 will give you an Agent Registry - a centralized inventory of agents and their key attributes. That registry can serve as the foundation of the accountability architecture you need to build on top of it.

For every high-impact agent in your environment, you need four things documented before May 1 or immediately after; a sketch of what such a record might look like follows.

First: a named human accountable owner. Not a team. Not a function. A person, with a title and an accountability statement, who has accepted the residual risk of that agent operating in your environment. That person is who the regulator will call.

Second: documented decision rights. At what thresholds does this agent act autonomously? At what thresholds does it escalate to a human? Who approved those thresholds? When were they last reviewed?

Third: a stop authority. Who has the authority to take this agent offline? What conditions trigger that action? Is there a documented procedure? Have you tested it? Public rules and proposals for algorithmic trading governance have emphasized “kill switch” controls and other mechanisms to halt problematic systems. AI agents making consequential decisions in your enterprise require the equivalent.

Fourth: an oversight record. Not just that the agent acted. That a human reviewed the action, had the ability to intervene, and either did or consciously chose not to. That record is the difference between automated processing with meaningful human involvement and a decision left entirely to an algorithm - the distinction regulators look for when they assess whether your oversight meets standards in laws like the EU AI Act and GDPR.
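To make that concrete, here is a minimal sketch of what the four items could look like when captured as a structured record alongside an Agent Registry entry. Everything in it is illustrative - the field names, the Python shape, the idea of keying it to an Entra Agent ID - none of this is an Agent 365 API or a Microsoft schema. The point is that each of the four items is specific enough to be a required field rather than an aspiration.

```python
# Hypothetical accountability record kept alongside an Agent Registry
# entry. All names and fields are illustrative assumptions, not an
# Agent 365 API.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Action(Enum):
    INTERVENED = "intervened"        # human overrode the agent
    APPROVED = "approved"            # human reviewed and let it stand
    NOT_REVIEWED = "not_reviewed"    # the gap a regulator will ask about


@dataclass
class OversightEvent:
    agent_decision_id: str           # links back to the audit log entry
    reviewer: str                    # the human who looked at it
    action: Action
    timestamp: datetime
    rationale: str = ""


@dataclass
class AccountabilityRecord:
    agent_id: str                    # e.g. the agent's Entra Agent ID
    # 1. A named human accountable owner - a person, not a team.
    accountable_owner: str
    owner_title: str
    risk_acceptance_statement: str
    # 2. Documented decision rights: thresholds and who approved them.
    autonomy_thresholds: dict[str, float]
    escalation_thresholds: dict[str, float]
    thresholds_approved_by: str
    thresholds_last_reviewed: datetime
    # 3. Stop authority: who can take the agent offline, and how.
    stop_authority: str
    stop_conditions: list[str]
    stop_procedure_doc: str          # link to the documented procedure
    stop_procedure_last_tested: datetime | None
    # 4. The oversight record, accumulating over the agent's life.
    oversight_log: list[OversightEvent] = field(default_factory=list)

    def unanswered_question(self) -> bool:
        """True if this agent still cannot answer 'who decided?'"""
        return not self.accountable_owner or any(
            e.action is Action.NOT_REVIEWED for e in self.oversight_log
        )
```

Whether a record like this lives in a GRC platform, a database, or a spreadsheet matters less than that every field has a name in it before the agent goes live - and that the oversight log is actually written to.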

This is not a compliance checklist. It is the organizational design work that makes the technical capabilities Agent 365 delivers mean something beyond a control plane that tells you what happened after it went wrong.

THE CLOSE

The Dutch families spent years asking a question a government could not answer. The records existed. The system had logged enough to show what had happened. Nobody had designed the accountability chain that would have connected those records to a human who could be held responsible.

On May 1, your organization will have more visibility into your AI agents than the Dutch tax administration ever had into its algorithm.

The question is whether you will also have what they did not.

Not logs. Not identity management. Not audit trails.

An answer to who decided.

Sougata Roy is an enterprise architect with 26 years in complex systems and 12 years inside the SEC, CFTC, and NIH. He writes The Governance Gap on the accountability question autonomous AI makes impossible to defer.

Sources:
  1. Vasu Jakkal, "Secure Agentic AI for Your Frontier Transformation," Microsoft Security Blog, March 9, 2026.
  2. Microsoft Cloud Adoption Framework, "Establishing Responsible AI Policies for AI Agents."
  3. EU AI Act, Article 14, Human Oversight.
  4. GDPR, Article 22, Automated Individual Decision-Making.
  5. Federal Reserve, SR 11-7, "Guidance on Model Risk Management," April 4, 2011.
  6. Reuters, "Dutch Government Quits Over Childcare Subsidies Scandal," January 15, 2021.
  7. NIST, AI Risk Management Framework 1.0.
  8. IDC projection of 1.3 billion agents in circulation by 2028, cited in Charlie Bell, Microsoft Official Blog, November 5, 2025.