Agent Governance.
For the mid-market.
By Lindsay Hiebert · Founder · CISSP
Agent Governance is the category Google formally named at Cloud Next '26 — covering the agents, MCP servers, A2A discovery chains, and shadow AI tools that don’t fit a traditional API gateway, SIEM, IAM system, or GRC checklist. It’s the layer above the Wiz / Palo Alto AI-SPM products, above the Vanta / Drata compliance dashboards, and above the OneTrust / BitSight vendor-risk platforms.
SanctumShield is one of very few products — and likely the only one purpose-built for the mid-market layer of this category — organizations of 50–2,000 employees that don’t have dedicated platform engineering or security teams. We produce the artifact your board, your auditor, your regulator, and your cyber insurance underwriter actually consume — not the engineering output a SecOps dashboard generates.
What “Agent Governance” covers — and why it’s a superset.
Agent Governance is the superset that subsumes the older “AI governance,” “shadow AI,” and “AI-SPM” framings. It treats the AI application — not the individual agent or the underlying model — as the unit of management, and it explicitly covers the four risk surfaces below.
The category exists because Google’s own architecture team made this acknowledgment in public:
“We don’t want to treat agents like APIs. That’s what traditionally has been done. The behavior is more probabilistic. Their reasoning paths are non-deterministic.”
That’s a Google principal engineer publicly stating that every legacy governance tool — every API gateway, every IAM system, every SIEM correlation rule, every GRC checklist — is structurally insufficient for agent governance. The category exists because the existing categories don’t fit.
SanctumShield is the first SaaS purpose-built for mid-market Agent Governance — built for the 50–2,000-employee organizations the category leaders (Wiz, Palo Alto, Cisco) and the consultancies (Big 4, boutique advisories) have consistently underserved. Designed and operated by a CISSP-credentialed founder with deep domain expertise across AI, network security, and zero trust. See actual rendered artifacts before you decide.
Four surfaces, one category.
Agent Governance is fundamentally about four risk surfaces. SanctumShield covers the first one excellently today and is extending coverage to the other three via the Q3 2026 finding-catalog expansion.
| Surface | What it is | Introduced by | SanctumShield coverage |
|---|---|---|---|
| Shadow AI SaaS | Employees using external SaaS AI tools (ChatGPT, Claude, Cursor, Perplexity, Gemini) | Every employee | Excellent — 64-domain endpoint registry + 60+ pre-rated tools catalog |
| Deployed Agents | Agents the organization built or installed (Gemini Enterprise custom agents, OpenAI Agent SDK, Anthropic ADK, LangChain production endpoints, Microsoft Copilot Studio) | Engineering, IT, embedded SaaS vendors | Q3 2026 expansion — covered via the Deployed Agent Inventory finding type |
| MCP Servers | MCP (Model Context Protocol) endpoints registered to agents — used by Cursor, Claude Desktop, Zed, Gemini CLI, Warp | Developers, sometimes silently | Q3 2026 expansion — covered via the MCP Server Trust Chain Risk finding type |
| A2A Discovery Chains | Agent-to-agent (A2A) discovery and delegated calls between multi-agent frameworks (ADK, CrewAI, AutoGen, LangGraph) | Automated, propagating | Q3 2026 expansion — covered via the A2A Discovery Governance Gap finding type |
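Mechanically, the Shadow AI SaaS row's endpoint-registry check reduces to a hostname lookup against a rated-tool catalog. A minimal sketch, with illustrative domains and ratings — this is not SanctumShield's actual 64-domain registry:

```python
# Illustrative subset of an endpoint registry; domains and risk ratings
# are examples, not the actual 64-domain catalog.
AI_ENDPOINT_REGISTRY = {
    "chat.openai.com": {"tool": "ChatGPT", "risk": "review"},
    "claude.ai": {"tool": "Claude", "risk": "review"},
    "api.perplexity.ai": {"tool": "Perplexity", "risk": "review"},
}

def classify_egress(hostname: str) -> dict:
    """Return the registry entry for a destination, or mark it unrated.

    Matches the exact host or any registered parent domain, so
    'foo.claude.ai' is caught by the 'claude.ai' entry.
    """
    parts = hostname.lower().split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in AI_ENDPOINT_REGISTRY:
            return {"hostname": hostname, **AI_ENDPOINT_REGISTRY[candidate]}
    return {"hostname": hostname, "tool": None, "risk": "unrated"}
```

The parent-domain walk matters in practice: shadow-AI tools routinely serve traffic from subdomains that an exact-match blocklist misses.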
The reason agent governance needed its own category.
Each piece of your existing security stack was built for a specific kind of risk surface. Agents don’t fit any of them. Here’s the honest accounting, category by category.
API gateways: built for deterministic REST/HTTP APIs. They parse HTTP headers; agent traffic lives in JSON-RPC bodies (MCP, A2A). No semantic policy at the prompt level.
IAM systems: built for human and long-lived service-account identities. Agents are first-class principals with ephemeral, scoped, cryptographically attested identities — a different model.
SIEM: records events for incident correlation. It doesn't produce the regulation-anchored governance artifact a board, auditor, or carrier needs to evaluate AI exposure.
GRC platforms: checklist automation. They track whether a control exists. Not generative — they don't produce the AUP, the risk report, or the verification artifact. The SOC 2 auditor will ask about AI governance; the GRC dashboard has a checkbox, not an answer.
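The API-gateway point above is concrete: a header-level filter never sees which tool an agent is invoking, because that lives inside the JSON-RPC body. A sketch of a body-level policy check, assuming a simplified MCP-style `tools/call` payload — the field names follow the MCP JSON-RPC shape, but the allowlist policy itself is illustrative:

```python
import json

ALLOWED_TOOLS = {"search_docs", "summarize"}  # illustrative allowlist

def check_mcp_call(raw_body: str) -> tuple[bool, str]:
    """Allow or deny an MCP-style JSON-RPC request by inspecting the body.

    An HTTP-header gateway passes this through unseen: the method,
    tool name, and arguments all live inside the JSON-RPC envelope.
    """
    msg = json.loads(raw_body)
    if msg.get("method") != "tools/call":
        return True, "not a tool call"
    tool = msg.get("params", {}).get("name", "")
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' not on allowlist"
    return True, f"tool '{tool}' allowed"

# A request a header-only gateway would wave through:
body = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "delete_records", "arguments": {}},
})
```

Every header on that request can look benign; only body-level inspection reveals the `delete_records` invocation.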
The artifact layer above the platform layer.
Google ships an excellent platform-engineering answer. Wiz ships an excellent SecOps-automation answer. SanctumShield sits above both, producing the artifacts that translate technical posture into executive accountability.
Google Agent Platform: five pillars (Registry, Gateway, Identity, Policies, Observability). Operates agents on Google Cloud at runtime. Requires a platform engineering team.
Buyer: Platform engineering
Wiz AI-SPM: Red / Blue / Green agents that find, investigate, and remediate exploitable vulnerabilities in deployed agents. Engineer-facing artifacts (pull-request remediations, forensic verdicts).
Buyer: DevSecOps team
SanctumShield: produces the regulation-anchored AI Acceptable Use Policy, the board-ready Executive Risk Report, the verification URL, and the board memo. These are the artifacts auditors, boards, regulators, and cyber insurance underwriters consume.
Buyer: CISO, GC, Compliance, Board
A mature 2026 AI risk program uses all three layers. They are complementary, not competing. SanctumShield is one of very few products — and likely the only one priced and built to be self-serve at the mid-market tier.
Mid-market organizations.
50 to 2,000 employees.
- → 50–2,000 employee organizations
- → 1–3 person security or IT teams
- → CISOs / IT Directors / GCs facing a board-pressure conversation about AI governance this quarter
- → Healthcare SaaS, fintech, B2B SaaS, RIAs, and professional services firms
- → Buyers without dedicated platform engineering or SRE teams
- → Not built for: Fortune 500 organizations seeking a centralized, top-down platform-engineering-led AI governance program — evaluate Google Agent Platform or Wiz AI-SPM directly
- → Also built for: individual departments and lines of business inside larger organizations (2,000+ employees) where AI use is decentralized and unit-level governance is needed. A 500-person clinical operations group, a 300-person product engineering line of business, or a 200-person legal team can each adopt SanctumShield independently at $99/month per scope.
- → Not built for: organizations with an existing Wiz / Palo Alto / Cisco AI Defense investment that already covers the runtime layer
- → Not built for: AI labs training their own foundation models (model governance is a different problem)
- → Not built for: use cases where you need “protect-the-running-agent” rather than “govern-the-organization’s-AI”
Designed and built specifically for organizations that don’t have dedicated platform engineering or security teams.
What happens when the developer is an agent?
The policy chain breaks in a place no one is watching.
Google demonstrated the next stage publicly at Cloud Next '26: a coding agent (their google-agents-cli with seven skills — workflow, scaffolding, code, evaluation, deployment, publishing, observability) builds a multi-agent application end-to-end. The human in the demo never opens the IDE, never reads the code. The framing, verbatim, was “we expect you guys to build agents with building agents.” The development lifecycle compresses from months to minutes. Three governance questions don’t come along for the ride.
Where does the coder-agent get its policy from?
The skills installed into the coding agent are markdown files. They can be edited, swapped, or poisoned the same way an MCP server can. There’s no CISO in the loop when an engineer adds a skill.
Who owns code the agent wrote?
The commit author is the agent. The named human reviewer approved the agent’s attestations — not the code itself. When change-management asks who’s accountable for an artifact in production, the answer is ambiguous by design.
What runtime IAM did the agent hold?
The coder-agent at authoring time typically has broad repo write, dependency install, and deployment privileges. The deployed code runs under a different, narrower scope. The boundary between those scopes is rarely documented — and rarely revoked when the authoring session ends.
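One way to make the authoring/runtime boundary auditable is to issue the coder-agent a short-lived, scoped credential and verify that the deployed artifact runs under a strictly narrower scope set. A stdlib-only sketch; the scope names and credential shape here are hypothetical, not a real IAM API:

```python
import time

def issue_session(agent_id: str, scopes: set[str], ttl_s: int = 900) -> dict:
    """Issue an ephemeral authoring session: scoped, time-boxed, recorded."""
    return {"agent": agent_id, "scopes": frozenset(scopes),
            "expires": time.time() + ttl_s}

def session_valid(session: dict) -> bool:
    """Authoring privileges lapse automatically when the session expires."""
    return time.time() < session["expires"]

def runtime_is_narrower(authoring: dict, runtime_scopes: set[str]) -> bool:
    """Deployed code must hold a strict subset of the authoring scopes."""
    return frozenset(runtime_scopes) < authoring["scopes"]

# Broad authoring scopes (hypothetical names), time-boxed to 15 minutes:
authoring = issue_session("coder-agent-01",
                          {"repo:write", "deps:install", "deploy:push"})
```

The strict-subset check is the documented boundary the paragraph above says is usually missing: if the runtime scope set equals the authoring set, the check fails and the gap is flagged rather than silently inherited.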
SanctumShield’s Executive Risk Report flags this as the Agent-Authored Code Governance Gap when the customer’s engineering profile indicates coding-agent adoption (Cursor, Claude Code, Gemini CLI, Anthropic Antigravity, GitHub Copilot Workspace, google-agents-cli) without a documented policy chain. The AUP’s § 14.9 sub-clause covers the corresponding control set: policy provenance, named human accountability, change-management evidence, runtime IAM separation, and validator-pair attestation.
Sources: Google Cloud Next '26 sessions “Build AI Agents with Agents: Multi-Agent PR Roaster” (April 24, 2026) and the AI Agent Governance session (April 23, 2026, Google Cloud, with PayPal staff engineer Amir and Google principal engineer Srima Krishna).
Nine governance questions a board will ask.
SanctumShield answers each one with a specific artifact.
Adapted from the founder’s Cloud Next '26 perspective on the Agentic Era. Each question maps to a specific SanctumShield finding type, AUP section, or artifact — the answers your CISO can actually point at, not promises.
Agent Registry Readiness Checklist intake plus the Non-GCP Agent Discovery Gap finding. We catalog the agents Google Agent Platform's auto-registration cannot see — AWS Bedrock, Azure OpenAI, Anthropic API direct, self-hosted Ollama, sovereign deployments.
Agent Identity Model Gap finding. Triggered when agents authenticate with long-lived service account credentials, shared API keys, or embedded secrets rather than ephemeral SPIFFE-style identity. Cites NIST AI RMF MANAGE-1.3 + NIST SP 800-63 adapted for non-human identities.
Visibility-Access Mismatch finding. Triggered when a customer has catalogued AI tools or vendors but has not enforced per-application least-privilege access. Anchors to Google's own principle: visibility does not equal access.
Egress Mediation Gap finding. Triggered when agents are deployed without a consistent gateway, proxy, or sidecar mediating outbound calls — meaning no single point where egress policy can be enforced or observed. Cites NIST AI RMF MAP-4.1 + NIST SP 800-207 (Zero Trust).
Observability Readiness Gap finding. Triggered when agents are deployed without OpenTelemetry GenAI instrumentation or A2A trace headers. Cites EU AI Act Article 13 transparency obligations + SOC 2 CC7.2.
Every Executive Risk Report and AUP carries a verification URL queryable for 5 years. Cyber insurance underwriters and auditors paste the URL and confirm: when the report was generated, which model produced it, which registry version was used, which company. The contents are never exposed — verification only confirms authenticity.
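The verification model described above — confirm metadata authenticity without exposing contents — can be sketched as a keyed signature over the report metadata alone. Everything here (field names, the HMAC scheme) is illustrative, not SanctumShield's actual implementation:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"server-side-secret"  # held only by the verification service

def sign_metadata(meta: dict) -> str:
    """Sign report metadata (generation date, model, registry version, company).

    The report body is never part of the signature input, so verification
    confirms authenticity without revealing contents.
    """
    canonical = json.dumps(meta, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()

def verify_metadata(meta: dict, token: str) -> bool:
    """What an underwriter's paste-the-URL check would resolve to."""
    return hmac.compare_digest(sign_metadata(meta), token)

meta = {"generated": "2026-05-01", "model": "example-model",
        "registry_version": "r64", "company": "Acme Health"}
token = sign_metadata(meta)
```

Any change to any metadata field invalidates the token, which is what lets a five-year-old URL still prove when, by which model, and for which company the artifact was produced.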
AUP § 14.7 Kill-Switch. Every deployed agent must be subject to a documented kill-switch — a single action by the CISO's office (or designee) that immediately disables the agent across all environments. Procedure tested quarterly.
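The § 14.7 requirement reduces to a single authoritative disable action that every environment consults before running an agent. A toy sketch of that control shape (class and method names hypothetical):

```python
class AgentKillSwitch:
    """Single point of disablement consulted by every environment."""

    def __init__(self) -> None:
        self._disabled: set[str] = set()

    def kill(self, agent_id: str) -> None:
        """One action by the CISO's office disables the agent everywhere."""
        self._disabled.add(agent_id)

    def may_run(self, agent_id: str) -> bool:
        """Every environment checks this gate before executing the agent."""
        return agent_id not in self._disabled

switch = AgentKillSwitch()
switch.kill("billing-copilot")
```

The quarterly test the clause mandates is exactly this round trip: trigger the kill, then confirm `may_run` returns False in every environment.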
Agent-Authored Code Governance Gap finding plus AUP § 14.9. Five concrete controls: policy provenance, named human accountability, change-management evidence, runtime IAM separation, validator-pair attestation. Cites NIST AI RMF MANAGE-1.3 + SOC 2 CC8.1 + EU AI Act Article 14.
Executive Risk Report (5 regulation-anchored findings + 90-day action plan) and AI Acceptable Use Policy (14 sections, ~3,500–4,500 words). Plus the 1-page CEO-voice Board Memo derived from the same data. Each artifact carries the verification URL above.
The pattern: every governance question a board can ask maps to a specific artifact your CISO can hand them. That’s the SanctumShield model — not promises, not policy theatre, not consultant slideware. Findings with citations, policies with regulation anchors, and a verification URL that proves the whole thing is real.
See what an Agent Governance audit looks like — for your org.
Start with the free Shadow AI Risk Calculator. Twelve questions, sixty seconds, no account required. The full paid audit produces the board-ready Executive Risk Report and the regulation-anchored AI Acceptable Use Policy in under ten minutes total.