Skip to main content
Intelligence | May 12, 2026 | Microsoft Publishes Five-Level DDoS Resilience Maturity Framework for Consume...

Federal AI Implementations Are Building Governance Around the Wrong Threat Model

Long-form analysis, Enterprise AI Governance

Federal technology programs deploying AI systems are having the right governance conversation and arriving at the wrong conclusion. The instincts are sound. The threat model is not.

The working assumption in most federal AI implementation guidance is that the primary risk is the model absorbing institutional data and changing its own behavior over time. That assumption is shaping where controls get placed, around data ingestion, model custody, and what the AI is permitted to see. The two major OMB memoranda published in April 2025, M-25-21 and M-25-22, both treat model weights as the primary asset to protect. Neither document addresses what happens to an AI system after it is deployed and operating.

Standard commercial LLM deployments, including Azure OpenAI configurations most federal programs are running, do not retrain on user inputs during operation. Model weights stay fixed. What changes is the context the model receives, the documents it retrieves, and the instructions it is given at runtime. The risk is not a model that learns agency data and becomes something different. The risk is context poisoning, retrieval manipulation, and prompt injection, attacks that exploit what the model is given, not what it has learned.

NIST identified this distinction explicitly. In January 2026, NIST published a Request for Information on securing AI agent systems that separates inference-time threats, including indirect prompt injection, from training-time threats including data poisoning. That document treats them as distinct risk categories requiring distinct controls. The GAO acquisition report published in April 2026 reviewing AI procurements at DOD, DHS, GSA, and the VA does not make that distinction. It warns about vendors training models on flawed data and model performance degrading over time, framing that reflects the same misunderstanding the NIST RFI was designed to correct.

The gap between what NIST published in January 2026 and what federal acquisition guidance is actually building controls around is where federal AI deployments are currently exposed.

The second gap is at the authorization decision point. For every step where an AI system selects, ranks, rejects, or routes, a durable record needs to exist: what prompt was active, what model version was running, who authorized the execution, and under what documented boundaries. Most federal AI implementations are producing capability documentation, what the system can do, rather than authorization documentation, what it was permitted to do, who decided that, and when.

Federal environments have more mature systems of record than most private sector organizations. Delegated authority frameworks, procurement authorization chains, and examination evidence requirements are not new problems in government technology. The accountability infrastructure exists. It was not designed with autonomous agents in mind, and the mapping from procurement authorization to runtime agent authorization has not happened yet inside most programs.

When an oversight body eventually asks not whether an agency acquired an AI capability but who authorized that capability to act, the procurement record will not answer the question. The platform audit log will not answer the question. The authorization record will answer the question, and in most federal AI deployments right now, that record does not exist.

That is not a technology problem. It is a documentation problem that federal environments are better positioned than most to solve, if the right controls get placed in the right location before the question gets asked.

ABOUT THIS ESSAY

This analysis draws on publicly available federal guidance documents including OMB M-25-21, OMB M-25-22, NIST AI Agent Security RFI (January 2026), and GAO Report GAO-26-107859 (April 2026). Primary sources are cited in the text.

All essays