CISO
Enterprise Architect
Compliance Officer
Industry relevance
Financial Services
Healthcare
Government
MARCH 29, 2026
Every agent added to Copilot is a new prompt-injection surface — Microsoft's own guidance says tools and knowledge can pull from untrusted sources and influence behavior.
Microsoft’s current guidance on extending Microsoft 365 Copilot with agents explicitly warns that tools and knowledge can pull from untrusted sources and influence behavior. The implication is clear: every custom agent added to Copilot is also a new prompt-injection and tool-governance surface.
GOVERNANCE IMPLICATION
Microsoft's own documentation for extending Copilot with agents explicitly acknowledges the prompt-injection risk from tools and knowledge sources that pull from untrusted content. This is not a theoretical concern surfaced by third-party researchers — it is a first-party acknowledgment embedded in the official product guidance. For regulated organizations, this means the approval process for adding any custom agent to Copilot should include a documented review of every data source and tool the agent can access, with a specific assessment of whether any source can be influenced by untrusted external content.
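A documented source review of this kind can be expressed as a simple pre-approval check. The sketch below is illustrative only: the `AgentManifest` and `KnowledgeSource` structures and the `TRUSTED_KINDS` allowlist are hypothetical, not any Microsoft API. It flags every knowledge source whose content can be influenced by untrusted external parties.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeSource:
    name: str
    kind: str                  # e.g. "sharepoint", "rss", "public-web"
    externally_writable: bool  # can parties outside the org influence the content?

@dataclass
class AgentManifest:
    agent_name: str
    sources: list[KnowledgeSource]

# Source kinds the review board has pre-cleared as internal and access-controlled
# (hypothetical allowlist; each org would define its own).
TRUSTED_KINDS = {"sharepoint", "dataverse"}

def review_sources(manifest: AgentManifest) -> list[str]:
    """Return one finding for every source that untrusted content could reach."""
    findings = []
    for src in manifest.sources:
        if src.externally_writable or src.kind not in TRUSTED_KINDS:
            findings.append(
                f"{manifest.agent_name}: '{src.name}' ({src.kind}) "
                "can carry untrusted content; require content-safety review"
            )
    return findings
```

An agent grounded on public RSS feeds, as in the scenario that follows, would fail this check and be routed to a deeper review before approval.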
SCENARIO
A legal team at a financial services firm deploys a Copilot agent that monitors regulatory news feeds and summarizes relevant updates into a weekly briefing document. The agent's knowledge source includes public RSS feeds from three industry news sites. An adversary plants a prompt injection payload in an article on one of those sites. The next time the agent processes the feed, it includes the injected instruction in its output and forwards it to the team's SharePoint library where other Copilot agents use it as a grounding source. The original agent had no content safety guardrail on external knowledge sources.
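The missing guardrail in this scenario is any screening of external content before it is used for grounding. A deliberately naive version is sketched below to show where such a check would sit in the ingestion path; pattern matching like this catches only crude payloads and is no substitute for provenance controls or a real content-safety service. All names and patterns are hypothetical.

```python
import re

# Crude indicators of instruction-style payloads in ingested text.
# Heuristic only: real injections can be paraphrased past any fixed pattern list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?(system|developer) prompt",
    r"you are now",
    r"forward (this|the following) to",
]

def flag_for_review(feed_item: str) -> bool:
    """True if the item should be quarantined for human review before grounding."""
    return any(re.search(p, feed_item, re.IGNORECASE) for p in INJECTION_PATTERNS)
```

In the scenario above, even this minimal gate would have quarantined the planted article before its instructions reached the SharePoint library that other agents use as a grounding source.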
THE GOVERNANCE QUESTION
What review process should exist before a Copilot-connected agent is allowed to use tools or knowledge sources that can be manipulated by untrusted content?
CONTROL GAP
No standardized review process exists for assessing the prompt-injection risk of knowledge sources and tools before a custom agent is added to the organizational Copilot deployment. Most agent approval workflows focus on data access permissions, not on the trustworthiness of the content the agent ingests.
REGULATORY RELEVANCE
NIST AI RMF
SEC Cyber
FINRA
OCC
FFIEC
PRIMARY SOURCE
Extend Microsoft 365 Copilot with agents
Microsoft
February 27, 2026
Read the primary source →
MAY 1, 2026
Identity Data
Microsoft confirmed on May 1, 2026 that Conditional Access for agents is generally available for delegated access agents (those that act on behalf of a licensed human user). Conditional Access for own-access agents (those that operate with an independent identity not tied to a user session) remains in public preview. Microsoft Entra ID Protection applies dynamic risk evaluation to both agent and user identity signals and feeds those signals into Conditional Access policies. The GA and preview split means the two agent classes operate under materially different access control regimes at Agent 365 launch.
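The GA/preview split can be made operational in inventory tooling: tag each agent identity with the control regime it actually falls under, so own-access agents are never assumed to have GA-level Conditional Access coverage. A minimal sketch follows; the identity-type labels and regime strings are descriptive placeholders, not Entra API values.

```python
from enum import Enum

class AgentIdentityType(Enum):
    DELEGATED = "delegated"    # acts on behalf of a licensed human user
    OWN_ACCESS = "own-access"  # independent identity, no user session

# Per the May 1, 2026 announcement: Conditional Access is GA for delegated
# access agents and still in public preview for own-access agents.
REGIME = {
    AgentIdentityType.DELEGATED: "conditional-access-ga",
    AgentIdentityType.OWN_ACCESS: "conditional-access-preview",
}

def control_regime(identity_type: AgentIdentityType) -> str:
    return REGIME[identity_type]

def preview_gap(agents: dict[str, AgentIdentityType]) -> list[str]:
    """Agents whose access controls are preview-grade and need compensating review."""
    return [name for name, t in agents.items()
            if control_regime(t) == "conditional-access-preview"]
```

Running `preview_gap` over the agent inventory gives a standing list of identities that still need compensating controls until own-access Conditional Access reaches GA.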
MARCH 27, 2026
Identity Data
Microsoft Purview continues to be presented as a portfolio spanning data governance, security, and compliance, including controls such as information protection, DLP, investigations, and compliance tooling. In practice, that means Copilot readiness is inseparable from whether the Purview classification and policy work has actually been done.
MARCH 25, 2026
Identity Data
Microsoft Entra Agent ID extends Entra security capabilities to AI agents across build, discover, govern, and protect workflows. It applies Conditional Access policies, identity governance, identity protection risk signals, and network controls to agents. It is part of Agent 365 and currently requires a Microsoft 365 Copilot license with Frontier enabled.