Rob Kline — AI Agent Governance Architect

The governance gap in AI agent deployment is structural, not a matter of policy. Closing it requires architectural expertise, not ethics review.

This is not AI-as-novelty applied to governance. This is governance-as-discipline applied to AI.

The problem

Every major AI agent framework — LangChain, CrewAI, AutoGen, Anthropic's MCP, the OpenAI Agents SDK — solves for mechanism: tool access, agent coordination, state persistence. None provides execution governance: responsibility models, structured decision logic, exception handling, governed delegation interfaces, audit trails, or controlled vocabularies.

Organizations deploying agentic AI discover this gap the hard way. Agents that coordinate without accountability. Decisions made by LLM inference that should be deterministic. Exception handling that retries indefinitely or fails silently. “Guardrails” that are policy statements, not structural constraints.

The discipline that solved execution governance — Business Process Management — has been codified in international standards (BPMN 2.0, DMN 1.0, CMMN 1.0), professional bodies of knowledge (ABPMP BPM CBOK), and validated through decades of enterprise-scale operation. But no one has built the bridge from BPM discipline into agent architectures.

What I bring

Decades of BPM/BRM/taxonomy/ontology expertise — not theoretical knowledge, but deep operational experience with process governance, business rule management, decision modeling, and knowledge architecture in enterprise environments.

Formal governance specifications for AI agents — the Intent Stack Reference Model (a four-layer governance context architecture, published at intentstack.org under CC BY 4.0) and the BPM/Agent Stack (bpmstack.org), an execution governance specification that grounds BPMN 2.0, DMN 1.0, and the BPM CBOK in agent-specific application.

The ability to formalize — the Intent Stack includes seven independent mathematical formalizations (deontic logic, network theory, optimal control, differential geometry, category theory, order theory, topological algebra). The BPM/Agent Stack derives authority from established OMG standards. This is not thought leadership — it is specification-grade work.

What this means in practice

I design governance architectures where:

  • Deterministic decisions stay deterministic — DMN decision tables handle compliance classifications, not LLM inference
  • Agent activities carry governance attributes — 21 typed attributes per activity, drawing on RACI, SIPOC, Value Stream Mapping, and ISO 31000
  • Boundary constraints are structural, not advisory — agents are architecturally excluded from prohibited actions, not warned against them
  • Process governance is explicit — BPMN 2.0 process models with typed gateways, structured exception handling, and governed decomposition
  • Governance context connects to execution structure — the Intent Stack’s governance primitives bind to the BPM/Agent Stack’s execution models through a defined interface
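The accountability and boundary points above can be sketched in a few lines. This is a hypothetical illustration, not code from either specification: the names `RACI` and `GovernedActivity` are invented, only two of the 21 governance attributes are shown, and the tool names are placeholders. The point is that the boundary is structural — a tool outside the allowlist cannot be invoked at all, rather than being discouraged by a prompt.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RACI:
    """Responsibility assignment carried by an activity, not by a prompt."""
    responsible: str
    accountable: str
    consulted: tuple = ()
    informed: tuple = ()

@dataclass(frozen=True)
class GovernedActivity:
    """An activity whose governance attributes travel with every invocation."""
    name: str
    raci: RACI
    allowed_tools: frozenset  # structural boundary, not an advisory guardrail

    def invoke(self, tool: str, payload: dict) -> dict:
        # Structural exclusion: a prohibited tool raises before anything runs.
        if tool not in self.allowed_tools:
            raise PermissionError(f"'{tool}' is outside this activity's boundary")
        # Every result carries its accountable party for the audit trail.
        return {"activity": self.name,
                "tool": tool,
                "accountable": self.raci.accountable,
                "payload": payload}
```

An agent holding a `GovernedActivity` can still decide *when* to act, but *what* it may touch and *who* answers for it are fixed at configuration time.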

The timing

  • EU AI Act enforcement begins August 2, 2026 — organizations need governance architecture, not checklists
  • Only 1 in 5 companies has a mature governance model for autonomous agents
  • The BPM → AI agent governance bridge has zero prior art — no consulting firm, platform vendor, or standards body has formalized it
  • The academic community is actively seeking it: “Agentic BPM” papers published at BPM 2025/2026 conferences describe the gap this work fills
  • 40%+ of Fortune 500 will have Chief AI Officers by 2026; only 1.5% are satisfied with governance headcount

Intent Stack Specification

A published four-layer governance context architecture for AI agent behavior. ~104,000 words. CC BY 4.0. Seven independent mathematical formalizations.

BPM/Agent Stack Specification v1.1

Execution governance for AI agent architectures. 14 clauses. Three-layer governance architecture. Agent species as governance configurations. 21-attribute governed activity model.
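To make “agent species as governance configurations” concrete, here is a hypothetical sketch of the idea: a species is declared as data, and every agent instance inherits its governance envelope from it. The field names (`decision_authority`, `escalation_target`, and so on) are illustrative assumptions, not attributes from the BPM/Agent Stack specification.

```python
# A species declares governance posture as configuration, not code.
AUDITOR_SPECIES = {
    "species": "compliance-auditor",
    "decision_authority": "none",         # decisions delegated to DMN tables
    "write_access": False,                # read-only boundary constraint
    "escalation_target": "human-review",  # structured exception route
    "audit_trail": "mandatory",
}

REQUIRED_FIELDS = {"species", "decision_authority", "write_access",
                   "escalation_target", "audit_trail"}

def instantiate(species: dict, agent_id: str) -> dict:
    """Create an agent instance that inherits its species' governance envelope.

    An incomplete configuration is rejected outright: an agent without a
    declared governance posture should never come into existence.
    """
    missing = REQUIRED_FIELDS - species.keys()
    if missing:
        raise ValueError(f"incomplete governance configuration: {sorted(missing)}")
    return {"id": agent_id, **species}
```

The design choice worth noting is that governance is validated at instantiation time, before the agent can act, rather than checked after the fact.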

Governed Reference Implementation (In Development)

Governed security compliance audit: 14-step BPMN-aligned workflow, DMN decision tables with UNIQUE hit policy, read-only boundary constraints, structured escalation routing. Proposed as a demonstration use case for the NIST NCCoE AI Agent Identity and Authorization project.
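The decision step in such a workflow can be sketched as a DMN-style decision table under the UNIQUE hit policy: exactly one rule may fire, and overlapping rules are a table error, not a runtime coin flip. The rule contents below are invented for illustration and are not taken from the reference implementation; routing non-matches to escalation (rather than returning a null result, as plain DMN would) is an assumed design choice that reflects the “no silent failure” principle.

```python
# Hypothetical decision table: (control_state, severity) -> disposition.
RULES = [
    (("compliant", "low"),      "pass"),
    (("compliant", "high"),     "pass"),
    (("noncompliant", "low"),   "remediate"),
    (("noncompliant", "high"),  "escalate"),
]

def decide(control_state: str, severity: str) -> str:
    """Evaluate the table under the UNIQUE hit policy.

    Zero matches -> structured escalation, never a silent default.
    More than one match -> the table itself is invalid and must be fixed.
    """
    hits = [outcome for condition, outcome in RULES
            if condition == (control_state, severity)]
    if len(hits) > 1:
        raise ValueError("UNIQUE hit policy violated: overlapping rules")
    if not hits:
        return "escalate"
    return hits[0]
```

Because the table is plain data, the classification is deterministic, diffable, and auditable — no LLM inference sits between the input facts and the disposition.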


Get in Touch