CISO
CIO
Enterprise Architect
Legal
Industry relevance
Financial Services
Healthcare
Government
MARCH 9, 2026
Microsoft and Anthropic built a multi-step AI work system that runs over hours or days — intent drift over time is a new accountability gap no existing governance framework addresses.
Microsoft announced Copilot Cowork as part of Copilot Wave 3, developed in close collaboration with Anthropic. It brings the technology powering Claude Cowork into Microsoft 365 Copilot to support extended, multi-step work sessions that operate across time rather than within a single prompt exchange.
GOVERNANCE IMPLICATION
Copilot Cowork's multi-step, time-extended work model introduces a form of AI delegation that existing governance frameworks were not designed for. When an AI system executes a task over hours or days, the human's original authorization becomes temporally distant from the agent's actions. Intent drift — the accumulation of small reasoning deviations over a long task — can result in an output that differs materially from what the employee approved at the start. For regulated organizations where work products carry legal or fiduciary weight, the accountability question is not who approved the task at 9am but who owns the outcome at 11pm when the agent completes it.
SCENARIO
A senior analyst at an asset management firm authorizes a Copilot Cowork session to research and draft a market risk report, pulling from approved internal and external data sources. The session runs overnight across 14 steps. At step 11, the agent encounters ambiguous data and makes an interpretive decision the analyst would not have approved. The final report contains an error in a risk calculation that influences a rebalancing decision. When the error is discovered, the question is whether the analyst is accountable for the output of a system they authorized but did not supervise overnight.
THE GOVERNANCE QUESTION
When an AI system makes decisions and takes actions across hours or days on behalf of an employee, intent drift becomes a structural risk rather than a theoretical one. What is your organization's documented human review checkpoint for a multi-step AI workflow — and who owns the outcome if the agent's reasoning over time diverges from what the employee originally authorized?
CONTROL GAP
No governance framework defines human review checkpoints for multi-step AI work sessions that extend beyond a single human session. Acceptable use policies address what AI can do, not how extended AI work is monitored, interrupted, or reviewed once the authorizing human is no longer present.
REGULATORY RELEVANCE
SEC Cyber
FINRA
NIST AI RMF
OCC
PRIMARY SOURCE
Introducing the First Frontier Suite built on Intelligence + Trust
Microsoft
Read the primary source →
APRIL 1, 2026
Microsoft
Microsoft’s current product guidance keeps Microsoft 365 Copilot and Microsoft 365 Copilot Chat in distinct operating categories. One is the licensed work-grounded layer across Microsoft 365 data and apps; the other is the broader chat entry point that can add agent capability without requiring the same license path.
MARCH 31, 2026
Microsoft
Microsoft now describes Microsoft 365 Copilot Chat as secure AI chat that adds pay-as-you-go agents, plus features such as Copilot Pages, file upload, and image generation. That makes chat not just a conversational layer, but the likely first point of AI contact for many users who do not yet hold a full Microsoft 365 Copilot license.
MARCH 30, 2026
Microsoft
The current Microsoft Copilot Studio documentation frames the product as more than a chatbot builder. It now centers agents, knowledge sources, tools, agent flows, MCP servers, publishing to Teams and Microsoft 365, and performance analysis. That widens the operational surface area significantly.