The problem
Every AI agent framework solves mechanism — how models access tools, how agents coordinate, how state persists. None solves execution governance: who is responsible for each step, what structured logic governs decisions, what happens when things go wrong, what constraints apply at each activity, and what audit trail exists.
Organizations deploying agents encounter this gap immediately. An agent that can use tools but has no structured exception handling will retry indefinitely or fail silently. An agent that coordinates with other agents but has no responsibility model produces work nobody is accountable for. An agent that makes decisions but has no separation between deterministic rules and probabilistic judgment uses LLM inference for compliance checks that should be deterministic.
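The contrast between an agent that retries indefinitely and one with structured exception handling can be sketched in a few lines. This is an illustrative sketch only: the names `ToolError`, `run_with_policy`, and `escalate` are hypothetical and not drawn from any framework or specification discussed here.

```python
# Illustrative sketch: a bounded retry budget with explicit escalation,
# in contrast to an agent loop that retries indefinitely or fails silently.
# All names here (ToolError, run_with_policy, escalate) are hypothetical.

class ToolError(Exception):
    """Raised when a tool invocation fails."""

def run_with_policy(tool, *, max_attempts=3, escalate=print):
    """Invoke a tool with a bounded retry budget; escalate on exhaustion."""
    for attempt in range(1, max_attempts + 1):
        try:
            return tool()
        except ToolError as exc:
            if attempt == max_attempts:
                # Structured exception path: surface the failure to an
                # accountable party instead of failing silently.
                escalate(f"tool failed after {attempt} attempts: {exc}")
                raise
```

The design point is that the retry limit and the escalation target are process attributes decided by the organization, not behaviors left to the model's discretion.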
Only one in five companies reports a mature AI governance model for autonomous agents. The organizations that combine BPM process discipline with AI agent governance are vanishingly rare.
The three-layer architecture
Safe, accountable AI agent deployment requires three governance layers, each independently necessary:
| Layer | Scope | Question answered |
|---|---|---|
| Constitutional AI (substrate) | Training-time governance — shapes the model’s character universally | “What kind of agent is this?” |
| Intent Stack (governance context) | Runtime governance — specifies what is delegated, by whom, under what authority, with what constraints | “What is this agent authorized to do, and why?” |
| BPM/Agent Stack (execution structure) | Execution governance — specifies how authorized work gets done with structure, roles, decisions, exceptions, and accountability | “How does this agent execute with process discipline?” |
Each layer answers a question the others cannot. Constitutional AI cannot know an organization’s specific constraints. The Intent Stack cannot specify how to coordinate three agents through a gateway. The BPM/Agent Stack cannot determine whether a delegation was authorized. The layers are orthogonal and complementary.
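The runtime separation between the two lower layers can be illustrated with a minimal sketch. Constitutional governance is training-time and has no runtime hook here; the other two layers answer distinct questions before and during execution. All names below (`Delegation`, `Activity`, `execute`) are invented for illustration and do not come from either specification.

```python
# Minimal sketch of the layer separation at runtime, under assumed names.
from dataclasses import dataclass

@dataclass
class Delegation:
    """Intent Stack concern: what is authorized, by whom."""
    principal: str
    scope: set

@dataclass
class Activity:
    """BPM/Agent Stack concern: how authorized work gets done."""
    name: str
    required_scope: str

def execute(activity: Activity, delegation: Delegation, do_work):
    # Governance-context question: is this activity within the delegation?
    if activity.required_scope not in delegation.scope:
        raise PermissionError(
            f"{activity.name!r} not within delegation from {delegation.principal}")
    # Execution-structure question: run the step and keep an audit record.
    result = do_work()
    return {
        "activity": activity.name,
        "authorized_by": delegation.principal,
        "result": result,
    }
```

Note that neither check can substitute for the other: the delegation cannot say how the step is performed, and the activity cannot establish that anyone authorized it.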
The bridge nobody has built
The discipline of Business Process Management — codified in the ABPMP BPM CBOK, standardized through BPMN 2.0, DMN 1.0, and CMMN 1.0 (all OMG standards), validated through decades of enterprise operation across banking, healthcare, manufacturing, government, and insurance — solved execution governance long ago.
The AI agent community is reinventing these patterns from scratch, informally, and incompletely. Framework builders add “guardrails” without structured policy linkage. They implement “tasks” without activity attributes. They describe “handoffs” without governed delegation interfaces. They are solving a solved problem, badly.
“No consulting firm, AI platform company, or governance framework has built the bridge from BPM discipline into agent architectures. The Intent Stack and BPM/Agent Stack specifications formalize this bridge.”
Published work
- Intent Stack Reference Model v1.2 — Four-layer governance context architecture for AI agent behavior within organizations. Four layers: Intent Discovery, Intent Formalization, Specification, Runtime Alignment. Published specification, CC BY 4.0. Seven independent mathematical formalizations.
- BPM/Agent Stack Specification v1.1 — Execution governance for AI agent architectures, grounding proven BPM patterns in agent-specific application. 14 clauses covering three-layer architecture, agent species as governance configurations, governed activity models with 21 typed attributes, and structured stitching between governance context and execution structure.
Together the two specifications address seven governance concerns: four for governance context (Intent Discovery, Intent Formalization, Specification, Runtime Alignment) and three for execution governance (Orchestration, Integration, Execution).
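The BPM/Agent Stack specification defines 21 typed attributes for governed activities. As a purely hypothetical illustration (these attribute names are invented here, not taken from the specification), a small subset of such a typed model might look like this:

```python
# Hypothetical subset of a governed activity's typed attributes.
# Attribute names are illustrative inventions, not the spec's 21 attributes.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class DecisionMode(Enum):
    DETERMINISTIC = "deterministic"   # rule or decision-table evaluation
    PROBABILISTIC = "probabilistic"   # model judgment

@dataclass
class GovernedActivity:
    activity_id: str
    responsible_role: str                     # who is accountable for the step
    decision_mode: DecisionMode               # deterministic rules vs. LLM judgment
    max_retries: int = 0                      # bounded exception handling
    escalation_target: Optional[str] = None   # where failures are routed
    audit_tags: list = field(default_factory=list)

# A compliance check configured as deterministic, per the argument above
# that such checks should not run on LLM inference.
compliance_check = GovernedActivity(
    activity_id="kyc-screen",
    responsible_role="compliance-officer",
    decision_mode=DecisionMode.DETERMINISTIC,
    max_retries=1,
    escalation_target="compliance-queue",
)
```

The value of typing these attributes is that governance questions (who is responsible, what happens on failure, which decisions are deterministic) become machine-checkable properties of the process model rather than informal conventions.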
BPM's Missing Application: Why BPMN 2.0 and DMN 1.0 Are the Answer to AI Agent Governance
Every major AI agent framework shares a common structural deficiency: the absence of an execution governance layer. The …
Read brief
Beyond Guardrails: A Structural Architecture for Governing AI Agent Behavior
The AI agent community's dominant governance concept — guardrails — is architecturally impoverished. A structural …
Read brief