
Experience
26+
Years building enterprise systems
Focus
Regulated AI governance
Accountability before scrutiny arrives
Sougata Roy
I spent 26 years building enterprise systems, including six years inside two federal financial regulatory agencies, which means I know what accountability evidence has to look like to survive an examination.
Perspective
Twenty-six years building enterprise systems. The last twelve were spent inside regulated environments where governance failure had legal consequences. Not budget consequences, not reputational ones. Legal ones.
That distinction is what most AI governance writing misses.
The Real Question
What survives examination?
Not the most sophisticated system. The clearest record of who authorized it, why it exists, and how it is reviewed.
Most people writing about AI governance are describing the technology. The risk frameworks. The model evaluations. The red-teaming results. That work matters, but it answers a different question than the one regulated enterprises will eventually have to answer.
The question examiners ask is not technical. It is organizational. Who decided this agent was authorized to act here? Where is that decision documented? Who signed it? When?
Twelve years of watching how regulated organizations handle that question in financial services, healthcare, and government produced a clear pattern. The organizations that survive scrutiny are not the ones with the most sophisticated technology.
They are the ones that wrote things down before the examiner arrived. Authorization records. Named accountable owners. Review cadences. Documented scope decisions.
Autonomous agents do not change what examiners expect. They make the evidence harder to produce if accountability was never documented in the first place.
That is the gap this research is built to close.
Where The Work Lives
Free to read and cite. Built to be attributed.
The research lives in three places: The Governance Gap newsletter, published every Tuesday on LinkedIn; eight original frameworks for regulated enterprises building the accountability layer for AI deployment; and an intelligence feed of dated, sourced observations on what is actually shipping in the Microsoft AI stack and what it means for governance. Everything here is free to read and cite. Use the frameworks and terms in your own work with attribution to Sougata Roy and sougataroy.com. Please do not republish, rebrand, or claim authorship of any framework, term, or model as your own.
The specific memory that made this research feel necessary: a team that had deployed fourteen agents across their Microsoft environment, had Purview running, had alert coverage, had dashboards.
A governance review found that not one of the fourteen had a document saying who had authorized it, what it was permitted to do, or who would be accountable if it did the wrong thing.
The team was not negligent. Nobody had told them that documentation was the governance layer. The platform had not asked for it. The frameworks available had not required it. This site exists because that team is not unusual.
Newsletter
Weekly writing
Ongoing observations published every Tuesday on LinkedIn.
Frameworks
Original models
Working structures for accountability, authorization, and governance design.
Intelligence
Dated field notes
Sourced observations on what is actually shipping in the Microsoft AI stack.
Contact
If something here landed for you, whether a framework named a problem you have been carrying or an essay described a meeting you have already been in, I want to hear what it was.
Questions about the research or the governance questions it raises are welcome by direct message on LinkedIn.
Connect on LinkedIn ->
Views expressed are my own and do not represent my employer.