Governance Disruption Index: Business & Financial Operations

April 3, 2026

The Deployment Landscape

Anthropic publishes the Economic Index — a research program that uses privacy-preserving analysis of millions of real Claude conversations to measure how AI is being incorporated into economic tasks. Rather than surveying what people say they do with AI, the Economic Index observes what they actually do: mapping anonymized conversations to O*NET occupational tasks, classifying collaboration patterns, and tracking deployment trends across sectors and geographies.

We use the Economic Index as our deployment baseline because it provides the most granular, evidence-based picture available of actual AI usage in the economy. Not projections. Not vendor surveys. Observed behavior at scale, organized by the same Standard Occupational Classification framework the Bureau of Labor Statistics uses.

The Governance Disruption Index measures a different dimension of the same reality. Anthropic’s data tells you where AI is working. Our analysis tells you where that work lacks governance infrastructure.

Here is what the data shows for financial services:

Business & Financial Operations occupations carry a theoretical AI exposure of 94.3% — meaning that nearly all tasks performed in these roles could, in principle, be accelerated by a large language model. The observed exposure is 28.4%. Roughly 30% of the theoretical capability is actually deployed. That ratio will climb toward the theoretical ceiling. The question is whether governance infrastructure will be in place when it does.

The deployment is not hypothetical. A 2025 MIT Technology Review Insights/EY survey of 250 banking executives found that roughly 70% of institutions are already engaged with agentic AI — 16% through active deployments and 52% through pilots. Forty-four percent of finance teams plan to use agentic AI in 2026, a sixfold increase over the prior year.

The 28.4% observed exposure and the 70% institutional engagement are measuring different things — individual task penetration versus organizational commitment — but they tell the same story: financial services is deploying AI agents at scale, and the deployment is accelerating.

What’s Actually Happening

The most visible transformation is in consulting. McKinsey’s Kate Smaje told the Wall Street Journal that engagements once requiring an engagement manager plus fourteen consultants now need an engagement manager plus two or three consultants alongside AI agents. As of January 2026, global managing partner Bob Sternfels reported the firm had deployed 25,000 AI agents alongside 40,000 human employees, with a target of parity by year-end. McKinsey’s internal tool Lilli, trained on over 100,000 knowledge documents, saves consultants an estimated 30% of research time. Deloitte, PwC, and EY have each deployed proprietary AI platforms across their advisory practices.

Within financial institutions, the deployment spans the autonomy spectrum. Compliance and fraud detection systems operate with high autonomy — automated flagging, blocking, and alert generation with limited human intervention. Financial analysis and advisory remain predominantly augmentation, with AI producing analyses that humans review and act on. The emerging frontier is agentic end-to-end workflows: multi-step processes where AI agents autonomously execute sequences of research, analysis, compliance checking, and report generation.

This creates an autonomy gradient that governance must address differently at each level. Augmentation governance is about information quality — ensuring the human decision-maker receives accurate, relevant analysis. Agentic governance is about delegation, boundaries, authorization, and accountability — ensuring the agent operates within governed constraints when no human is watching each step.
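A minimal Python sketch of the gradient, with illustrative level names and governance questions of our own devising (no published taxonomy is implied):

```python
from enum import Enum

class AutonomyLevel(Enum):
    AUGMENTATION = "augmentation"          # AI informs, a human decides
    HIGH_AUTONOMY = "high_autonomy"        # AI acts, humans review exceptions
    AGENTIC_WORKFLOW = "agentic_workflow"  # AI executes multi-step sequences

# Illustrative mapping: the governance question changes with autonomy.
GOVERNANCE_FOCUS = {
    AutonomyLevel.AUGMENTATION: "information quality: is the analysis accurate and relevant?",
    AutonomyLevel.HIGH_AUTONOMY: "intervention: can a human halt or override flagged actions?",
    AutonomyLevel.AGENTIC_WORKFLOW: "delegation: are boundaries, authorization, and accountability enforced at each step?",
}

for level in AutonomyLevel:
    print(f"{level.value}: {GOVERNANCE_FOCUS[level]}")
```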

The shadow dimension compounds this challenge. Across all sectors, 68% of employees use AI tools without IT department approval. In financial services, where information barriers between client matters and regulatory compliance obligations are not optional, unauthorized AI tool usage creates exposure that standard IT governance cannot detect. A compliance officer who uses a general-purpose AI assistant to analyze client data has potentially created an information barrier breach that no monitoring system was designed to catch.

Where Governance Is Missing

The Governance Disruption Index assesses governance exposure through five primitives — measurable dimensions of the gap between how AI is deployed and how it is governed. The framework draws on the same structural intuitions as Anthropic’s Economic Index: just as economic primitives capture foundational dimensions of how AI is used, governance disruption primitives capture foundational dimensions of how that usage is ungoverned.

Primitive Scores: Business & Financial Operations

Primitive | Score | Assessment
Intent Specification Depth | 1 / 4 (Informal) | Extensive compliance policies exist but are not connected to agent configuration
Boundary Propagation Integrity | 1-2 / 4 (Partial) | Regulatory limits exist at organizational level but do not propagate to agent execution
Decision Governance Ratio | 0-1 / 4 (Ungoverned) | Deterministic compliance decisions increasingly delegated to probabilistic LLM inference
Accountability Chain Completeness | 1-2 / 4 (Nominal) | Human accountability frameworks exist but lack operational connection to agent decisions
Runtime Alignment Coverage | 1 / 4 (Output Review) | Compliance monitoring checks outputs, not alignment with governing intent
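For readers who prefer the scoring made explicit, here is a minimal sketch of the table as data, assuming simple additive aggregation (our reading of the 4-7 out of 20 range reported in the assessment below); the names and structure are ours:

```python
from dataclasses import dataclass

@dataclass
class PrimitiveScore:
    name: str
    low: int   # lower bound of the assessed level (0-4 scale)
    high: int  # upper bound of the assessed level

SCORES = [
    PrimitiveScore("Intent Specification Depth", 1, 1),
    PrimitiveScore("Boundary Propagation Integrity", 1, 2),
    PrimitiveScore("Decision Governance Ratio", 0, 1),
    PrimitiveScore("Accountability Chain Completeness", 1, 2),
    PrimitiveScore("Runtime Alignment Coverage", 1, 1),
]

low = sum(s.low for s in SCORES)
high = sum(s.high for s in SCORES)
print(f"Aggregate: {low}-{high} / {len(SCORES) * 4}")  # Aggregate: 4-7 / 20
```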

Intent Specification Depth: Level 1. Financial institutions maintain extensive governance documentation — compliance manuals, risk policies, investment guidelines, fiduciary obligations. But these are human-readable documents. When an AI agent analyzes a client portfolio, it operates from a system prompt (“you are a senior financial analyst”), not from a machine-processable representation of fiduciary duty, client investment objectives, or regulatory constraints specific to this engagement. The gap between “comply with SOX” as a policy statement and a machine-actionable specification of what SOX compliance means for this agent’s specific activities is entirely unbridged. The governance intent exists. The formalization for machines does not.
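The gap is easiest to see side by side. In the illustrative sketch below, the prose prompt carries no enforceable structure, while the intent record does; every field name and value is hypothetical rather than a published schema:

```python
# Today: governance intent collapses into a prose prompt.
system_prompt = "You are a senior financial analyst. Comply with all applicable regulations."

# A governed alternative: intent captured as structured, machine-checkable
# data per engagement. Every field name and value here is hypothetical.
engagement_intent = {
    "client_id": "C-1042",
    "investment_objective": "capital_preservation",
    "risk_tolerance": "conservative",
    "regulatory_regime": ["SEC", "FINRA"],
    "prohibited_assets": {"crypto", "leveraged_etf"},
    "concentration_limit_pct": 5.0,  # maximum single-position weight
}

def violates_intent(position: dict) -> bool:
    """A machine-enforceable check derived from the intent record."""
    return (
        position["asset_class"] in engagement_intent["prohibited_assets"]
        or position["weight_pct"] > engagement_intent["concentration_limit_pct"]
    )

print(violates_intent({"asset_class": "crypto", "weight_pct": 2.0}))    # True
print(violates_intent({"asset_class": "treasury", "weight_pct": 3.0}))  # False
```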

Boundary Propagation Integrity: Level 1-2. Regulatory limits — position limits, concentration limits, counterparty exposure limits, information barriers between client matters — exist at the organizational policy level. But when an AI agent accesses a trading system, portfolio management platform, or client database, it carries technical access credentials, not governance context. The system knows the agent is authorized to connect. It does not know which constraints govern what the agent may do once connected. When agents delegate to sub-agents (a research agent spawning a data retrieval agent), the parent’s constraints do not propagate. In a governed architecture, boundaries would accumulate monotonically through every delegation interface — each level inheriting and adding constraints, never relaxing them. In current deployments, constraints attenuate or disappear across delegation boundaries.
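A minimal sketch of what monotonic propagation would look like in code; the agent names, constraint labels, and delegate function are all illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentContext:
    name: str
    constraints: frozenset  # governance boundaries in force for this agent

def delegate(parent: AgentContext, child_name: str, added: set) -> AgentContext:
    """Monotonic delegation: the child inherits every parent constraint
    and may add its own, but can never drop one."""
    return AgentContext(child_name, parent.constraints | frozenset(added))

research = AgentContext("research_agent", frozenset({
    "no_client_B_data",    # information barrier
    "position_limit_10M",  # organizational risk limit
}))
retrieval = delegate(research, "data_retrieval_agent", {"read_only"})

# The invariant current deployments lack: constraints never attenuate.
assert research.constraints <= retrieval.constraints
print(sorted(retrieval.constraints))
```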

Decision Governance Ratio: Level 0-1. Whether a transaction triggers AML reporting requirements. Whether a client qualifies for a particular investment product under suitability rules. Whether a trade exceeds risk limits. These are deterministic decisions — they have correct answers derivable from regulations, policies, and client data. They require no judgment. Yet in AI-assisted financial workflows, they are increasingly processed through LLM inference: probabilistic, non-reproducible, and impossible to audit. The same client profile submitted twice might produce different suitability determinations. A regulatory examiner asking “how did the system reach this conclusion?” receives an answer that amounts to “the model thought so.” The deterministic/probabilistic separation — routing rule-based decisions through reproducible decision tables and reserving LLM capability for genuinely judgment-requiring analysis — does not exist in current financial AI deployments.
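A minimal sketch of the decision-table pattern described here, with invented suitability rules:

```python
# Invented rules for illustration; the property is what matters: the same
# client and product always yield the same determination, attributable
# to a rule identifier a regulator can inspect.
SUITABILITY_RULES = [
    ("R1", lambda c, p: p["risk_rating"] > c["risk_tolerance"], "unsuitable: risk exceeds tolerance"),
    ("R2", lambda c, p: p["min_horizon_years"] > c["horizon_years"], "unsuitable: horizon too short"),
]

def suitability(client: dict, product: dict) -> tuple:
    for rule_id, predicate, determination in SUITABILITY_RULES:
        if predicate(client, product):
            return rule_id, determination
    return "R0", "suitable"

client = {"risk_tolerance": 2, "horizon_years": 3}
product = {"risk_rating": 4, "min_horizon_years": 1}
print(suitability(client, product))  # ('R1', 'unsuitable: risk exceeds tolerance')
```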

Accountability Chain Completeness: Level 1-2. Financial services has well-established accountability frameworks: fiduciary duty, three lines of defense, compliance officers, licensed analyst requirements. These frameworks were designed for human decision-makers with human-scale decision volumes. When an AI agent produces a financial analysis that informs an investment recommendation, the licensed analyst is nominally accountable. But the analyst may have no visibility into the agent’s reasoning process, no mechanism to reconstruct how the analysis was produced, and no practical ability to verify every output at the volume AI enables. The accountability exists on paper. The operational connection between the accountable human and the agent’s actual decision process does not.
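A sketch of what an operational connection might record; the schema, analyst identifier, and digest placeholders are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One reconstructable link in the accountability chain."""
    agent_id: str
    accountable_human: str   # the licensed analyst of record
    inputs_digest: str       # hash of the data the agent saw
    reasoning_summary: str   # what the agent did and why
    output_digest: str       # hash of what the analyst received
    timestamp: str

record = DecisionRecord(
    agent_id="portfolio-analysis-agent-7",
    accountable_human="analyst:jdoe",
    inputs_digest="sha256:<digest of inputs>",
    reasoning_summary="Screened 40 holdings against concentration limits; flagged 2.",
    output_digest="sha256:<digest of output>",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(f"{record.accountable_human} is accountable for {record.agent_id}")
```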

Runtime Alignment Coverage: Level 1. Compliance monitoring in financial services is mature — trade surveillance, limit monitoring, regulatory reporting validation. But this monitoring assesses outputs: did the trade exceed position limits? Did the report contain required disclosures? It does not assess alignment with governing intent: is this trading strategy consistent with the client’s declared investment purpose? Is this research analysis serving the portfolio objective the client authorized? An AI agent that produces compliant outputs serving the wrong purpose — technically legal trades that don’t advance the client’s investment objectives — passes every compliance check and fails every alignment test. No financial institution currently monitors AI agent behavior against governing investment intent in real time.
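The distinction fits in a few lines. In this sketch, built on invented limits and strategy labels, a trade clears the output check while failing the intent check:

```python
def compliance_check(trade: dict) -> bool:
    """Output check: is the trade within hard limits?"""
    return trade["position_pct"] <= 10.0  # invented position limit

def alignment_check(trade: dict, declared_intent: dict) -> bool:
    """Intent check: does the trade serve the client's declared purpose?"""
    return trade["strategy"] in declared_intent["authorized_strategies"]

intent = {"authorized_strategies": {"income", "capital_preservation"}}
trade = {"position_pct": 4.0, "strategy": "speculative_momentum"}

print(compliance_check(trade))         # True:  passes every compliance check
print(alignment_check(trade, intent))  # False: fails the alignment test
```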

The Incident Record

The governance gap in financial services is not hypothetical. Enforcement actions from the past eighteen months demonstrate the pattern — and reveal that the gap predates AI agents, rooted in the same structural failure that AI deployment now compounds.

Two Sigma — $90M for Model Governance Failure (January 2025). The SEC fined Two Sigma $90 million after an employee made unauthorized changes to quantitative trading model parameters without supervisory review — changes that caused $165 million in client losses, which Two Sigma also repaid. The violation was not in the models themselves. It was in governance: the database holding parameters for live-trading models had no access controls. Any staff member had unrestricted read/write access. Organizational policies about model change authorization existed on paper but did not reach the system level in enforceable form. This is Boundary Propagation Integrity failure: governance intent that never became governance infrastructure.

Brex Treasury — $900K for Automated Compliance Failure (August 2024). FINRA fined Brex Treasury $900,000 for AML deficiencies including reliance on an automated identity-verification system that was “not reasonably designed.” The system contributed to hundreds of potentially fraudulent accounts attempting over $15 million in transactions. The failure was not that automation was used — it was that a deterministic compliance decision (does this identity verification meet regulatory standards?) was delegated to an automated system without adequate governance of the system’s design or performance. This is Decision Governance Ratio failure: a decision that demands deterministic rigor processed through a system with no mechanism to ensure that rigor.

Interactive Brokers — $475K for Algorithm Oversight Gap (October 2024). FINRA fined Interactive Brokers $475,000 after a flawed securities lending algorithm miscalculated available share inventory, creating approximately $30 million in segregation deficits across more than 800 instances. The firm also allowed an unregistered person to oversee changes to the algorithm. This is Accountability Chain Completeness failure: the person responsible for the algorithm was not qualified, and supervisory systems for the algorithm’s output were inadequate.

The Compound Risk. These cases involve traditional automated systems — quantitative models, identity-matching algorithms, inventory-calculation software. None were AI in the contemporary sense. But they demonstrate the structural failure that AI agents will amplify: governance infrastructure designed for human-speed, human-scale operations applied to automated systems operating at machine speed and scale. As AI agents take over more of the decisions that humans previously made in financial services, this governance gap does not close. It compounds. Every agent deployment without corresponding governance infrastructure adds to a governance debt that accrues interest at the rate of machine-speed decision-making.

The Infosys Knowledge Institute’s 2025 survey of 1,500 executives at large enterprises ($1B+ revenue) found that 95% of those using enterprise AI experienced at least one AI-related incident. Only 2% of companies met responsible AI maturity standards across governance, risk, trust, and sustainability. The gap between deployment velocity and governance readiness is not a financial services problem alone — but in financial services, where regulatory consequences compound operational failures, the stakes are highest.

What Governed Deployment Looks Like

The governance gap is not inevitable. The constructs needed to close it exist — formalized in the Intent Stack reference model (seven governance layers for AI delegation) and the BPM/Agent Stack specification (execution governance for agent workflows). Applied to financial services, these constructs transform specific failure modes into governed operations.

Portfolio Analysis: From Compliance Check to Intent Alignment. An AI agent currently receives a system prompt and access to client data. It produces analysis. A human reviews the output for quality. Under governed deployment, the agent operates within a specification derived from the client’s investment purpose — discovered through formal intent elicitation, not inferred from a prompt. Risk tolerance, regulatory constraints, and investment policy restrictions are machine-enforced boundaries that accumulate through every delegation interface. Suitability determinations and concentration limit checks run through deterministic decision tables — reproducible, auditable, consistent. The licensed analyst remains accountable and now has visibility into the agent’s decision process through a governed audit trail. Runtime alignment monitoring assesses whether the analysis serves the declared investment purpose, not just whether it stays within compliance limits.

Compliance Monitoring: From Probabilistic to Deterministic Where It Matters. An AI agent currently monitors transactions using LLM-based pattern recognition, producing probabilistic flags that cannot be reproduced or explained to a regulator. Under governed deployment, deterministic compliance rules — reporting thresholds, sanctions screening, customer due diligence requirements — execute through decision tables that produce the same output for the same input every time. LLM capability is reserved for genuinely judgment-requiring analysis: unusual pattern detection, contextual risk assessment, emerging threat identification. Every flagging decision carries an attribution trail linking it to either a deterministic rule (auditable) or a probabilistic assessment with confidence scores (explainable). Escalation events define structured handoff when the system encounters conditions beyond its governed scope — it stops and escalates rather than guessing.
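A minimal sketch of the attribution-and-escalation pattern. The $10,000 threshold echoes the familiar currency-transaction-reporting figure; the jurisdiction set and risk-score function are stand-ins for real screening and model components:

```python
KNOWN_JURISDICTIONS = {"US", "GB", "DE"}  # illustrative governed scope

def model_risk_score(txn: dict) -> float:
    return 0.35  # stand-in for a probabilistic model assessment with confidence

def flag_transaction(txn: dict) -> dict:
    """Every flag carries an attribution trail: a deterministic rule hit
    (auditable) or a probabilistic score (explainable). Out-of-scope
    conditions escalate rather than guess."""
    if txn["amount"] >= 10_000:  # deterministic reporting-threshold rule
        return {"flag": True, "basis": "rule:ctr_threshold", "deterministic": True}
    if txn["country"] not in KNOWN_JURISDICTIONS:  # beyond governed scope
        return {"flag": None, "basis": "escalate:unknown_jurisdiction", "deterministic": False}
    score = model_risk_score(txn)
    return {"flag": score > 0.8, "basis": f"model:risk={score:.2f}", "deterministic": False}

print(flag_transaction({"amount": 12_000, "country": "US"}))
print(flag_transaction({"amount": 500, "country": "ZZ"}))
```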

Multi-Agent Advisory Workflows: From Ad Hoc to Orchestrated. Multiple AI agents currently handle different aspects of client engagements with informal handoffs and no orchestration governance. Under governed deployment, each agent operates in a defined responsibility swimlane. Information barriers between client matters are governance-layer boundaries, not just access controls — a subprocess serving Client A cannot access Client B’s data because the boundary is enforced at the governance layer regardless of technical access. Milestone gates require human authorization at phase transitions. The governing investment purpose translates through every delegation interface, so the research agent, analysis agent, compliance agent, and reporting agent all operate within the same governed intent.
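An illustrative sketch of a barrier enforced at the governance layer rather than through credentials; the class and scope names are ours:

```python
class GovernanceLayer:
    """Illustrative barrier enforcement: the check runs against the
    governing engagement, not the agent's technical credentials."""
    def __init__(self, barriers: dict):
        self.barriers = barriers  # engagement -> set of permitted data scopes

    def authorize(self, engagement: str, data_scope: str) -> bool:
        return data_scope in self.barriers.get(engagement, set())

gov = GovernanceLayer({"client_A_matter": {"client_A_data"}})

# Even an agent whose credentials could technically reach Client B's data
# is denied at the governance layer when acting for Client A.
print(gov.authorize("client_A_matter", "client_A_data"))  # True
print(gov.authorize("client_A_matter", "client_B_data"))  # False
```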

The Regulatory Landscape

Financial services faces the most active AI regulatory landscape of any sector. The EU AI Act classifies key financial services AI uses, such as creditworthiness assessment, as high-risk, with requirements effective August 2026 — less than five months away. The SEC and FINRA continue active oversight of algorithmic and model-based trading systems under existing frameworks. Federal Reserve guidance on model risk management (SR 11-7, adopted by the OCC as Bulletin 2011-12) applies to AI models but was written before agentic AI existed. Basel III/IV capital requirements implicitly cover model risk but do not address multi-agent composition.

The structural challenge: existing regulations require accountability, auditability, and compliance — outcomes that governance infrastructure must deliver. But they do not specify how those outcomes apply to AI agents making or influencing decisions at machine speed across multi-step workflows. The regulatory intent is clear. The translation into machine-processable governance is the gap.

Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027, driven by escalating costs, unclear business value, and inadequate risk controls. In financial services, where regulatory consequences compound project failure — where a canceled project may also trigger an enforcement action for the ungoverned decisions the agents already made — the cost of governance debt is not just the project investment. It is the regulatory exposure accumulated during the ungoverned deployment period.

Governance Disruption Assessment

Primitive | Financial Services Score | What It Means
Intent Specification Depth | 1 / 4 | Governance intent exists in human-readable form but is not formalized for agents
Boundary Propagation Integrity | 1-2 / 4 | Organizational constraints do not reach agent execution in enforceable form
Decision Governance Ratio | 0-1 / 4 | Deterministic compliance decisions processed through probabilistic systems
Accountability Chain Completeness | 1-2 / 4 | Accountability exists on paper but lacks operational connection to agent behavior
Runtime Alignment Coverage | 1 / 4 | Outputs monitored, but intent alignment is not

Financial services scores between 4 and 7 out of 20 across the five primitives — among the lowest of any sector we assess, despite having the most mature human governance infrastructure. The paradox is the point: governance maturity for human practitioners does not transfer to AI agents. The concepts exist. The translation mechanism does not.

Governance-Deployment Velocity Ratio: Diverging. Financial services has the strongest existing governance infrastructure of any sector — decades of regulatory compliance, model risk management, three lines of defense. But this infrastructure was designed for human practitioners making decisions at human speed. Agent deployment velocity — 70% institutional engagement, McKinsey restructuring entire engagement models, a sixfold increase in planned agentic AI adoption among finance teams — is outpacing the translation of that governance into machine-processable form.

The trajectory is clear: without a systematic translation mechanism, financial services faces a governance debt that compounds with every new agent deployment. The governance concepts are already understood. The frameworks already exist in human form. What is missing is the formalization layer — the mechanism that takes decades of financial governance wisdom and makes it machine-operable.

That is precisely what the Intent Stack and BPM/Agent Stack were designed to provide.

What Comes Next

This analysis was produced using publicly available information — deployment data from Anthropic’s Economic Index, SEC and FINRA enforcement actions, industry surveys from MIT Technology Review, EY, and the Infosys Knowledge Institute, and governance gap assessment using the Intent Stack and BPM/Agent Stack analytical frameworks.

Imagine what this analysis looks like with your organization’s data: your agent deployment inventory, your governance policies, your delegation structures, your compliance requirements mapped construct by construct against the five governance disruption primitives.

The gap between your human governance maturity and your AI governance readiness is measurable. The constructs to close it are specific. The regulatory timeline is fixed.

Contact PracticalStrategy.AI →