CISO
Compliance Officer
Legal
Board
Industry relevance
Financial Services
DECEMBER 9, 2025
FINRA's 2026 oversight report formally identifies AI agent autonomy and auditability as specific risks requiring supervisory frameworks — not just AI policies.
FINRA's 2026 Annual Regulatory Oversight Report names GenAI agents as a new trend requiring explicit supervisory treatment. It identifies five risk dimensions specific to AI agents: autonomy without human validation, scope and authority beyond user intent, auditability complications in multi-step reasoning, data sensitivity exposure, and domain knowledge gaps in industry-specific contexts.
GOVERNANCE IMPLICATION
FINRA's identification of five specific AI agent risk dimensions in its 2026 Annual Regulatory Oversight Report establishes the supervisory framework expectation for broker-dealers. FINRA Rule 3110 supervisory obligations apply to AI agent actions in the same way they apply to human actions — the agent's autonomy does not transfer the supervisory obligation away from the registered firm. For broker-dealers deploying AI agents in any workflow that touches customer communication, trading, or compliance functions, the 2026 oversight report creates a documented standard against which FINRA examinations will assess supervisory adequacy.
SCENARIO
A broker-dealer deploys an AI agent to assist registered representatives in drafting client communication. The agent operates autonomously, drafting and in some cases sending communications on behalf of the representative. A FINRA examination in 2026 asks the firm to demonstrate how it supervises AI agent-generated communications under Rule 3110. The firm's supervisory procedures manual addresses email review but makes no mention of AI-generated communications. The gap is cited as a deficiency.
THE GOVERNANCE QUESTION
FINRA states that AI agents lack the tacit knowledge, transparency, and predictability that traditional supervisory and governance practices assume. Broker-dealer supervisory obligations under FINRA Rule 3110 do not pause because an AI agent is performing the action. Which of your agent deployments has a documented supervisory review process that satisfies Rule 3110 — and who is the registered person whose name is on that supervision record?
CONTROL GAP
Most broker-dealer supervisory procedures manuals were written before AI agents existed as a production capability. Firms' Rule 3110 supervisory frameworks have not been updated to define what adequate supervision of an AI agent looks like in customer-facing or compliance-sensitive contexts.
REGULATORY RELEVANCE
FINRA
SEC Cyber
NIST AI RMF
PRIMARY SOURCE
2026 FINRA Annual Regulatory Oversight Report: GenAI Continuing and Emerging Trends
FINRA
Read the primary source →
APRIL 7, 2026
Regulated Industries
On April 7, 2026, NIST's Information Technology Laboratory published a concept note launching development of an AI Risk Management Framework Profile specifically for Trustworthy AI in Critical Infrastructure. The profile targets AI deployed across Information Technology, Operational Technology, and Industrial Control Systems in sectors including energy, water, healthcare, and financial services. NIST researchers Raymond Sheh and Martin Stanley are leading the effort. NIST is establishing a Community of Interest for stakeholder input through seminars, working sessions, and requests for information. The profile aims to harmonize definitions across AI, critical infrastructure, and cybersecurity domains and provide actionable risk management guidance for operators at any level of AI maturity.
JANUARY 27, 2026
Regulated Industries
FINRA published observations from its risk monitoring engagement with member firms in January 2026. Firms are moving cautiously on customer-facing AI agents while moving faster on back-office automation. FINRA encouraged firms to proactively engage as their agentic AI strategies develop and noted it will continue monitoring and sharing findings with the industry and fellow regulators.