CISO
CIO
Compliance Officer
Board
Industry relevance
Financial Services
Healthcare
Government
Energy
APRIL 9, 2026
NIST is building sector-specific AI governance requirements for critical infrastructure — the voluntary framework phase for regulated industries is ending.
NIST released a concept note on April 7, 2026 for an AI RMF Profile on Trustworthy AI in Critical Infrastructure, published on the NIST AI Risk Management Framework page at nist.gov. The profile is intended to guide critical infrastructure operators toward specific risk management practices when deploying AI-enabled capabilities. This is the first sector-specific extension of the NIST AI RMF 1.0, originally published in January 2023, beyond the 2024 Generative AI Profile that extended coverage to LLMs and agentic systems. NIST is soliciting public feedback on the concept note.
GOVERNANCE IMPLICATION
The NIST AI RMF has operated as voluntary guidance since January 2023. A sector-specific profile for critical infrastructure marks a structural shift: sector profiles create the reference baseline that regulatory examiners, procurement evaluators, and auditors use when assessing AI governance maturity. The NIST Cybersecurity Framework followed the same trajectory: voluntary in 2014, effectively mandatory for regulated sectors within four years through examination guidance and contract requirements. Financial services, healthcare, and energy organizations should treat this concept note as a forward compliance signal, not an optional input.
SCENARIO
A regional electric utility deploys AI agents to monitor grid anomalies and automate switching decisions during demand spikes. In April 2026, NIST releases the critical infrastructure AI RMF concept note. By Q3 2026, NERC CIP working groups begin discussing whether to reference the profile in reliability standards. The utility's CISO, who classified the AI deployment as an operational technology initiative outside the information security governance structure, now has six months to either map the deployment to the AI RMF profile or document why it does not apply. The window for proactive governance has closed.
THE GOVERNANCE QUESTION
When NIST begins publishing sector-specific AI governance profiles, at what point does the AI RMF shift from voluntary guidance to the de facto compliance baseline regulators measure regulated organizations against?
CONTROL GAP
Most critical infrastructure organizations have not mapped their AI deployments to the NIST AI RMF GOVERN function, which requires documented ownership structures, risk tolerance thresholds, and explicit accountability lines for AI decisions. A sector-specific profile makes that gap visible to examiners for the first time.
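The GOVERN-function gap described above can be made concrete as an inventory check: for each AI deployment, is an owner, a risk tolerance, and an accountability line documented? A minimal sketch, assuming a hypothetical internal inventory format (the `AIDeployment` record and its field names are illustrative, not part of the NIST profile):

```python
from dataclasses import dataclass

# Illustrative per-deployment record of what the AI RMF GOVERN function
# expects to be documented: an accountable owner, an explicit risk
# tolerance, and a named accountability line for AI decisions.
@dataclass
class AIDeployment:
    name: str
    owner: str = ""                # accountable executive or team
    risk_tolerance: str = ""       # e.g. "advisory only, no autonomous switching"
    accountability_line: str = ""  # who answers for the AI's decisions

def govern_gaps(deployments):
    """Return the names of deployments missing any GOVERN documentation."""
    required = ("owner", "risk_tolerance", "accountability_line")
    return [d.name for d in deployments
            if any(not getattr(d, f) for f in required)]

inventory = [
    AIDeployment("grid-anomaly-monitor", owner="OT Security",
                 risk_tolerance="advisory only", accountability_line="CISO"),
    AIDeployment("demand-spike-switching"),  # deployed, never documented
]
print(govern_gaps(inventory))  # → ['demand-spike-switching']
```

An empty result is the state an examiner reading a sector profile would expect to find; anything else is the visible gap.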
REGULATORY RELEVANCE
NIST AI RMF
FFIEC
OCC
HIPAA
PRIMARY SOURCE
AI Risk Management Framework
NIST Information Technology Laboratory
April 7, 2026
Read the primary source →
CONTINUE READING
MARCH 21, 2026
Compliance
Microsoft published its Zero Trust for AI framework on March 19, 2026 through the Microsoft Security Blog, announcing four new tools: a new AI pillar in the Zero Trust Workshop, updated Data and Networking pillars in the Zero Trust Assessment tool, a new Zero Trust reference architecture for AI systems, and practical patterns and practices for securing AI at scale. The framework extends the three core Zero Trust principles across the full AI lifecycle from data ingestion and model training through deployment and agent behavior. The new AI pillar specifically evaluates how organizations secure AI access and agent identities, protect sensitive data used by and generated through AI, monitor AI usage and behavior across the enterprise, and govern AI in alignment with risk and compliance objectives.
APRIL 18, 2026
Accountability
Microsoft published an AI observability checklist for enterprise steering committees on April 16, 2026 via the Microsoft Cloud Blog, authored by Alym Rayani, VP of Marketing for Microsoft Security. The post frames observability as the foundational prerequisite for scaling enterprise AI in 2026 and introduces a refreshed version of Microsoft's governance guide, adding observability as a new pillar. The checklist identifies four questions every steering committee must be able to answer: what agents currently exist across the environment, who owns them, what data and systems they touch, and how they behave. Accenture is cited as a case study, having deployed over 75 AI use cases across industries with 16 in production after implementing centralized observability, reducing AI application build time by 50%.
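The four steering-committee questions above map naturally onto an agent registry. A minimal sketch, assuming a hypothetical registry schema (the `AgentRecord` fields are illustrative, not Microsoft's checklist format):

```python
from dataclasses import dataclass

# One registry entry per AI agent, structured around the four questions:
# what agents exist, who owns them, what they touch, how they behave.
@dataclass
class AgentRecord:
    name: str            # what agents exist across the environment
    owner: str           # who owns them
    touches: tuple       # data and systems they touch
    behavior_log: str    # where their behavior can be observed

def unanswered(records):
    """For each agent, list the checklist questions the registry cannot answer."""
    gaps = {}
    for r in records:
        missing = [q for q, v in (("owner", r.owner),
                                  ("touches", r.touches),
                                  ("behavior", r.behavior_log)) if not v]
        if missing:
            gaps[r.name] = missing
    return gaps

registry = [
    AgentRecord("invoice-triage", "Finance Ops", ("ERP", "email"),
                "otel://invoice-triage"),
    AgentRecord("shadow-summarizer", "", (), ""),  # discovered, undocumented
]
print(unanswered(registry))  # → {'shadow-summarizer': ['owner', 'touches', 'behavior']}
```

Any non-empty result identifies an agent the steering committee cannot yet account for, i.e. the observability gap the checklist is meant to surface.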