NEWSLETTER
Enron Was Audited Too.
March 24, 2026
The meeting had been running for forty minutes when the examiner asked the one question the compliance team had not prepared for.
The Agent 365 dashboard was open on the screen. Every action logged. Every decision timestamped. Twelve weeks of agent activity across forty-seven workflows, all of it searchable, all of it exportable, all of it consistent with what the Microsoft documentation promised. Three months of work by a compliance team that had done everything right.
"Show me the document where your organization decided this agent was authorized to act on customer accounts," she said.
Not the technical configuration. The organizational decision.
The room produced the particular silence that anyone who has spent time inside a regulatory examination will recognize immediately. Not thinking. Realizing.
THE QUESTION THIS EDITION ANSWERS
Microsoft's biggest announcement this month confirms that Agent 365 will treat AI agents as auditable entities alongside users and applications from May 1, 2026. Every enterprise reading that announcement has exhaled slightly. Most of them should not have. Auditability and accountability are related concepts. They are not interchangeable ones.
WHAT HAPPENED
Microsoft confirmed this month that Agent 365, generally available May 1, extends Microsoft Purview's audit and eDiscovery capabilities to AI agents, treating them as auditable entities alongside users and applications. Every prompt, every action, every data interaction can be logged, timestamped, and used for legal and regulatory response. This is genuinely significant. Microsoft has built much of the infrastructure most enterprises have been waiting for.
The announcement also confirms something the organizations paying closest attention already suspected. Microsoft can log what your agents did. What it does not provide is a record of whether what your agents did was what anyone in your organization actually decided they should do. Those are different services, and only one of them ships on May 1.
WHAT IT MEANS
Enron had auditors. This is worth understanding before moving to the next paragraph. Arthur Andersen was present and formally engaged as auditor. The audits found what they were structured and incentivized to find. What Andersen was never asked to find, and therefore never looked for, was whether the organizational decisions underneath the financial records were what they appeared to be. The audit was real. The accountability architecture for what was actually happening was fictional. One did not compensate for the other. The point here is not the accounting scandal. It is the governance lesson.
The same distinction applies to every AI agent your organization is running right now.
Auditability answers the question of what happened. It creates a record of every action an agent took, every data point it accessed, every decision it made or influenced. It is invaluable for post-incident investigation, regulatory response, and legal discovery. No reasonable enterprise should operate without it.
Accountability answers a different question entirely. It asks whether what happened was within the boundaries of what someone in the organization formally decided this agent was supposed to do. Not the vendor configuration. The organizational decision, with a human name attached, made before the first execution.
Agent 365 closes the observability gap. The other gap, the one between what an agent is configured to do and what your organization actually intended, remains exactly where it was. I call it the Intent Gap. May 1 does not touch it.
THE GOVERNANCE GAP
In the eighteen months after May 1, a specific pattern will surface inside regulated organizations.
An examiner will ask to see the authorization documentation for an agent action. The organization will produce an Agent 365 audit trail. The examiner will read it. Then she will ask when the organization decided this agent was authorized to take this category of action on this type of account. The audit trail will show the action. It will not show the decision that preceded it.
Most organizations will not have a clean answer, not because they were negligent, but because nobody asked the question before deployment. The agent was configured. It was never formally authorized. Those two things sound nearly identical and operate very differently when a regulator is in the room.
Configuration lives in Copilot Studio. Authorization lives in a document that most organizations have not yet written and a decision-making process most organizations have not yet designed.
One more observation before the next section. The organization in the opening scene had built the best audit infrastructure available. They had done everything the technology required. The examiner's question had nothing to do with the technology. It had everything to do with the organizational decision that preceded the technology. That question exists at every organization running agents right now.
ONE THING TO DO
Before May 1, take one agent currently running in your Microsoft environment. Write down, in plain language, three things: the singular purpose of this agent, where its authority ends, and who in your organization is accountable for the decisions it makes. Not what the Copilot Studio configuration allows. What your organization decided. Put a name next to it. One document, one agent, this week. That is the beginning of the accountability architecture the audit trail cannot provide.
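If it helps to make the exercise concrete, the three fields can be sketched as a minimal record. This is an illustrative shape only, not a Microsoft schema or any Agent 365 artifact; every name here is an assumption, and the real deliverable is the written document, not the code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgentAuthorizationRecord:
    """One agent, one document: the organizational decision, not the configuration."""
    agent_name: str         # which agent this record covers
    singular_purpose: str   # what the agent exists to do, in plain language
    authority_ends_at: str  # the boundary beyond which it must not act
    accountable_owner: str  # a human name, not a team or a role
    decided_on: date        # when the organization made the decision

    def __post_init__(self) -> None:
        # The record is meaningless without a named human owner.
        if not self.accountable_owner.strip():
            raise ValueError("accountable_owner must be a named person")

# Hypothetical example entries, for illustration only.
record = AgentAuthorizationRecord(
    agent_name="invoice-triage-agent",
    singular_purpose="Classify inbound invoices and route them to the correct queue",
    authority_ends_at="May not approve, reject, or modify any invoice",
    accountable_owner="Jane Doe",
    decided_on=date(2026, 4, 15),
)
```

The point of the structure is the constraint it encodes: a record with no named owner is not a record, and a decision date after deployment is an admission, not an authorization.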
A perfect audit trail that ends at configuration is better than no audit trail. It just means that when the examiner asks the hard question, you know exactly where the gap is. And so does she.