
CORE CONCEPTS

The implicit belief that someone else owns what your AI system does, whether you built it yourself or licensed it from a vendor.

Every enterprise has it. Most have never examined it. Regulators in financial services, employment, and consumer products have started saying explicitly that the vendor's terms of service are not a defense.

Free to read and cite with attribution to Sougata Roy and sougataroy.com. Do not republish, rebrand, or claim authorship of any framework, term, or model as your own.

THE PROBLEM

The assumption that nobody made but everyone is operating under

The CISO approved the AI vendor. Procurement signed the contract. The business unit configured the integration. The developer tested the outputs. Nobody in that chain formally decided who is accountable when the agent produces a harmful result.

They did not need to decide. The assumption was already in place. The vendor built the model, so the vendor is responsible for its behavior. The platform enforces the access controls, so the platform is responsible for what the agent can reach. The business unit owns the use case, so the business unit is responsible for the outcomes.

Every one of those assumptions is wrong. Regulators, auditors, and plaintiffs who are now arriving at these organizations have made that very clear.

The Accountability Assumption is the implicit organizational belief that accountability for an AI system's decisions resides with the team that built it, the vendor that supplied it, the procurement process that licensed it, or the platform that hosts it, rather than with the organization that decided to deploy it. It is not a deliberate choice. It is what fills the space when no deliberate choice is made.

INSIDE THE ORGANIZATION

The governance question

When an AI system operating on your behalf produces a consequential decision that causes harm, which person in your organization accepted accountability for that outcome before the agent went live, and can you produce the record showing they did?

THE CONCEPT

Three versions of the same assumption

The Accountability Assumption appears in three distinct configurations in enterprise AI deployments. Each one is a different answer to the same unasked question. Each one leaves the deploying organization exposed when the unasked question gets asked by an examiner.

VENDOR-CUSTOMER GAP

The vendor will handle it

The deploying organization assumes the AI vendor is responsible for the agent's behavior. The vendor's terms of service have already answered that question. The answer is almost always no. Vendors disclaim liability for outputs used in consequential decisions. The organization that deployed the agent and directed its use owns the accountability for how that use affected real people.

Named cases

Kistler et al. v. Eightfold AI Inc. was filed in California state court in 2026 and later removed to federal court. Plaintiffs alleged that Eightfold violated the Fair Credit Reporting Act by compiling applicant data without the required disclosures, consent, or a mechanism for reporting inaccurate information.

Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL, U.S. District Court for the Northern District of California, before Judge Rita F. Lin, was filed February 21, 2023. The case alleges that Workday's AI screening tools discriminated against job applicants based on race, age, and disability before any human review occurred. Workday's litigation position is that it provides technology and its customers make the hiring decisions. Many employers' position is that accountability for the screening sat with the platform. On May 16, 2025, the court granted preliminary collective certification. On February 17, 2026, the court authorized notice to every person aged forty or older who applied for jobs through Workday's platform since September 24, 2020. Neither side had an authorization record showing who formally owned accountability for what the system decided.

THREE-PARTY REGULATED-ENTITY GAP

The platform will handle it, and so will the vendor

In regulated industries, the Accountability Assumption compounds across vendor, platform, and regulated entity. Each assumes the others are managing the risk. Regulatory alerts increasingly name the answer: the regulated entity must determine whether vendor and platform integrations expose its own records, customers, and obligations.

Named case

FINRA issued a Cybersecurity Alert on the Salesloft Drift AI supply chain attack. FINRA told member firms to determine whether they or their vendors were impacted, disconnect affected integrations, rotate exposed credentials, review audit logs, and apply stricter access controls for third-party applications.

SUPPLY CHAIN GAP

The upstream vendor will handle it

When an organization deploys AI through a third-party AI vendor, it inherits the governance posture of that vendor's own infrastructure whether it evaluated that posture or not. The assumption that supply chain risk belongs to the vendor fails the moment a breach exposes the deploying organization's data.

Named case

In March 2026, Mercor confirmed a security incident linked to a compromise of the open-source LiteLLM project. Public reporting and filed litigation alleged a large data exposure connected to the supply chain compromise.

THE LEGAL REALITY

What the contract actually says

Before signing any AI vendor contract, review the liability section specifically for outputs used in consequential decisions. In many enterprise AI contracts currently in the market, the vendor disclaims liability for model outputs, agent decisions, and downstream consequences of use. This is not a hidden provision. It is the standard position.

The McDonald's McHire platform, operated by Paradox.ai, exposed personal data linked to approximately 64 million job applicant records due to credential failures that included an administrator account protected by the password "123456." McDonald's procurement and security functions had not evaluated the governance posture of an AI vendor handling tens of millions of applicants. The data and the regulatory exposure belonged to McDonald's regardless of what Paradox.ai's contract said about its own liability.

The Accountability Assumption survives precisely because it is never examined against the actual contract terms. The moment an organization reads its AI vendor agreements carefully, the assumption collapses and the accountability gap becomes visible. At that point, the organization is either working to close it deliberately or waiting for an event that closes it under external pressure.

THE SOLUTION

Replacing the assumption with a formal assignment

The Accountability Assumption is not closed by a policy statement or a vendor contract addendum. It is closed by a specific organizational design decision made before the agent goes live: naming a Consequence Owner who has explicitly accepted accountability for the agent's behavior.

A Consequence Owner is not the developer who built the agent. Not the IT team that approved the integration. Not the vendor who supplied the model. The Consequence Owner is the business owner, the named individual who can answer "yes, this agent should exist and here is why" if an examiner calls. They know they own it. Their name is in the authorization record. The record predates the agent's production operation.

That single requirement, a named Consequence Owner whose acceptance of accountability is documented before deployment, does more to close the Accountability Assumption than any policy document or vendor review process. It converts an implicit assumption into an explicit organizational decision. Explicit decisions survive examination.
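To make the requirement concrete, here is a minimal sketch of what such an authorization record might contain. The field names and structure are illustrative assumptions, not a standard schema; the essential properties are a named individual and an acceptance date that predates the agent's first production action.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AuthorizationRecord:
    """One record per deployed agent, created before production go-live.
    Illustrative sketch only; field names are assumptions, not a standard."""
    agent_id: str                      # stable identifier for the deployed agent
    consequence_owner: str             # a named individual, not a team or role alias
    business_justification: str        # the "yes, this agent should exist and here is why"
    authorized_scope: list[str]        # what the agent is permitted to do
    explicit_prohibitions: list[str]   # what the agent must never do
    escalation_path: str               # who is called when the agent misbehaves
    accepted_on: date                  # when the owner formally accepted accountability
    first_production_action: date | None = None

    def survives_examination(self) -> bool:
        """The core test: acceptance is documented and predates go-live."""
        return (
            self.first_production_action is not None
            and self.accepted_on <= self.first_production_action
        )
```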

The test

The test: identify any AI system operating on your behalf, whether built internally or licensed from a third-party vendor. Find the authorization record showing who accepted accountability for its behavior before it went live. If no such record exists, the Accountability Assumption is in place for that agent, and it has been in place since the day it was deployed.
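Applied across an inventory, the test reduces to a single pass. The sketch below reuses the illustrative AuthorizationRecord above; the inventory itself is an assumption about your environment, namely a deployment registry mapping each agent to its record, or to None where no record was ever created.

```python
def run_accountability_test(inventory: dict) -> list[str]:
    """Return the agents for which the Accountability Assumption is in place.

    `inventory` maps agent_id -> AuthorizationRecord or None. Every agent_id
    returned has been operating under the assumption since the day it was
    deployed.
    """
    exposed = []
    for agent_id, record in inventory.items():
        if record is None or not record.survives_examination():
            # No record, or acceptance did not predate go-live: the
            # assumption has governed this agent from deployment day.
            exposed.append(agent_id)
    return exposed
```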

THE FRAMEWORK

The Deployment Accountability Map in practice

The Deployment Accountability Map is the operational framework that makes the Accountability Assumption visible and addressable. It maps the three tiers that exist in every AI deployment: the AI provider, the control infrastructure, and the deploying organization. It defines exactly what each tier is and is not responsible for.

The provider tier covers model behavior within published specifications and platform security within the terms of the service agreement. It does not cover the deploying organization's intent, data access decisions, or human review requirements.

The control infrastructure tier enforces what the organization configures. It is not accountable for decisions the organization has never made.

The deploying organization tier owns everything the other two tiers do not cover, and that gap is always larger than organizations expect when they read their vendor terms of service for the first time. The Deployment Accountability Map is the tool for making that gap visible before it becomes a finding.
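One way to operationalize the map is as a simple structure teams can review against their vendor contracts. The encoding below is an illustrative sketch of the tier boundaries described above, not a published schema; the key property is that anything not explicitly claimed by the provider or the control infrastructure defaults to the deploying organization.

```python
# Illustrative encoding of the three tiers; the wording follows the
# descriptions above, but the structure itself is an assumption.
DEPLOYMENT_ACCOUNTABILITY_MAP = {
    "ai_provider": {
        "covers": [
            "model behavior within published specifications",
            "platform security within the service agreement",
        ],
    },
    "control_infrastructure": {
        "covers": [
            "enforcement of what the organization configures",
        ],
    },
    "deploying_organization": {
        # Owns everything the other two tiers do not cover.
        "covers": [],
    },
}

def owning_tier(responsibility: str) -> str:
    """Anything not explicitly covered upstream defaults to the deployer."""
    for tier in ("ai_provider", "control_infrastructure"):
        if responsibility in DEPLOYMENT_ACCOUNTABILITY_MAP[tier]["covers"]:
            return tier
    return "deploying_organization"
```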

RELATED CONCEPTS

Where the Accountability Assumption sits in the governance structure

The Accountability Assumption does not operate in isolation. It is the implicit belief that enables every other governance failure to persist unaddressed.

Governance Debt accumulates because no named owner with formal accountability is watching the ratio of governed to ungoverned agents. The Accountability Assumption is what makes that inattention feel safe until an external event makes it costly.

Intent Architecture is the organizational design work that replaces the Accountability Assumption with a formal accountability assignment. A complete Intent Architecture record names the Consequence Owner. A deployment without one is a deployment where the Accountability Assumption remains in place.

The Intent Gap widens when no one with formal accountability is monitoring the distance between what the agent was intended to do and what it actually does. The Accountability Assumption removes the person who would have been watching.

Agent Sprawl compounds the Accountability Assumption across every ungoverned deployment. Each new agent deployed without a named Consequence Owner is another instance of the assumption operating simultaneously. At scale, the assumption becomes organizational policy by default.

WHAT GOOD LOOKS LIKE

When the assumption has been replaced

Every agent deployed in the environment has a named Consequence Owner who knows they own it, whose name is in the authorization record, and who was named before the agent entered production operation. The authorization record predates the agent's first live action. The Consequence Owner can describe the agent's authorized scope, its explicit prohibitions, and the escalation path without looking at a technical configuration.

When a regulatory examiner asks who is accountable for a specific agent's behavior, the answer comes from the authorization record, not from a conversation about whose team built it or whose budget funded it. The accountability assignment was made before the question was asked.

That is the standard. The Deployment Accountability Map is the tool for getting there.