§ How It Works · A Real Example

A CISO-ready audit in 22 seconds.

Here's a real audit SanctumShield produced for a realistic healthcare scenario — the inputs we gave it, the output Claude returned, and what each finding means.

Every word below was generated by Claude Sonnet 4.5 from the exact inputs shown, in a single 22.5-second streaming response through the /api/audit/generate endpoint. Nothing was edited, rewritten, or embellished. This is what subscribers see in their dashboard on day one of the trial.
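
If you're curious what that streaming call looks like from the client side, here is a minimal sketch. Only the endpoint path comes from the description above; the request body shape and field names are assumptions for illustration.

```typescript
// Minimal sketch of a client consuming the /api/audit/generate stream
// (browser or Node 18+). The request body shape is an assumption.
async function streamAudit(
  profile: object,
  tools: object[],
  logAnalysis: object,
): Promise<string> {
  const res = await fetch("/api/audit/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ profile, tools, logAnalysis }),
  });
  if (!res.ok || !res.body) throw new Error(`audit failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let report = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    report += decoder.decode(value, { stream: true }); // render chunks as they arrive
  }
  return report; // the full ~5,000-word JSON report once the stream closes
}
```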

§ 01 · The Scenario

Meet Acme Health.

A fictional but realistic 150-person healthcare SaaS company operating in the US under HIPAA and SOC 2. Their CISO suspects employees are using consumer AI tools with patient data but has no visibility into which tools or how much. Their firewall logs hold the answer — they just need someone to read them against an AI endpoint registry and pair the result with a real risk model. They run the SanctumShield audit.

Input 01 · Company Profile
  • Name
    Acme Health
  • Industry
    Healthcare
  • Size
    51–200 employees
  • Jurisdictions
    United States
  • Frameworks
    HIPAA · SOC 2
Input 02 · Known AI Tools
  • ChatGPT · HIGH
    Consumer tier trains on inputs unless opt-out is enabled. No BAA available on consumer plans.
  • Cursor · HIGH
    Sends source code to configured LLM backend. Privacy mode not on by default.
These were the only tools the CISO knew about. The log analysis revealed a third one they didn't.
Input 03 · Network Log Analysis
142 outbound connections analyzed against our AI endpoint registry. Matched:
  • 89× chat.openai.com · Consumer ChatGPT
  • 12× character.ai · Roleplay chatbot, no healthcare compliance
Character.AI was the surprise. Not in the CISO's tool list, not in any policy, 12 confirmed connections from inside the network — and explicitly not HIPAA-compatible.
§ 02 · What Claude Produced

22 seconds later, a board-ready report.

The Sanctum passed the three inputs above into our policy-anchored prompt. The Shield streamed the response from Claude Sonnet 4.5 as it was generated. The full JSON output is roughly 5,000 words across eight findings, a regulatory exposure matrix, a 90-day action plan, and a departmental breakdown. Here are the first pieces:

Overall Risk Score
82/100
CRITICAL
Board-Ready Headline

“Unauthorized AI tools with 101 confirmed network connections are processing PHI without HIPAA-compliant safeguards or Business Associate Agreements.”

Quotable verbatim in a board meeting. No editing needed.
Executive Summary

Network traffic analysis reveals unauthorized use of AI tools processing protected health information (PHI) without Business Associate Agreements, creating immediate HIPAA violation risk. Of 142 outbound connections analyzed, 89 connected to OpenAI's consumer ChatGPT service and 12 to Character.AI — a roleplay chatbot platform with no healthcare compliance certifications. Both services train models on user inputs by default, meaning PHI entered by employees may now reside in third-party training datasets permanently.

The company confirmed only 2 AI tools in use but acknowledged the total is unknown, indicating zero visibility into shadow AI adoption across the 51–200 person workforce. Additionally, Cursor IDE is sending source code — potentially containing hardcoded credentials, patient identifiers, or proprietary algorithms — to external LLM backends without code sanitization controls. This represents a critical gap in both technical safeguards (HIPAA Security Rule §164.312) and vendor management (HIPAA Privacy Rule §164.502).

F1
CRITICAL

PHI Exposure via Consumer-Grade AI Services Without BAAs

Network logs show 89 connections to chat.openai.com (consumer ChatGPT) and 12 to character.ai, neither of which have Business Associate Agreements with Acme Health. Consumer-tier ChatGPT explicitly trains on user inputs unless enterprise agreements prohibit it. Character.AI is a roleplay chatbot platform with no healthcare compliance framework and unknown data retention policies. Any PHI entered into these services constitutes an unauthorized disclosure under HIPAA.

Evidence
89 hits to chat.openai.com and 12 to character.ai across 142 total connections. ChatGPT confirmed in AI tools inventory. Character.AI discovered via network analysis, not disclosed in self-assessment.
Business Impact
Potential HIPAA breach requiring notification to HHS OCR. OCR penalties range from $100 to $50,000 per violation with annual maximums of $1.5M per violation category. Reputational harm in a trust-dependent market.
Regulatory Citations
HIPAA Privacy Rule §164.502(e) · HIPAA Security Rule §164.312 · SOC 2 CC6.1

These are real HIPAA clauses. §164.502(e) governs permitted disclosures to business associates — and specifically requires a BAA before PHI is shared. §164.312 governs technical safeguards for protecting electronic PHI. Claude cited them because the prompt explicitly asked it to map findings to the frameworks Acme Health declared in scope (HIPAA + SOC 2), and because the model has enough context about healthcare regulation to do so accurately.

→ The stream continues with seven more findings, a tool_risks array, a regulatory_exposure matrix across four frameworks, and a 90-day action plan prioritized by severity and effort.
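
For readers who want the shape rather than the prose, here is a hedged sketch of the report's top-level JSON, reconstructed from the pieces named in this section; field names beyond findings, tool_risks, and regulatory_exposure are assumptions.

```typescript
// Hedged sketch of the report's top-level JSON shape. The score, headline,
// findings, tool_risks, regulatory_exposure, and 90-day action plan are all
// named above; the exact field names and nesting are assumed.
type Severity = "LOW" | "MEDIUM" | "HIGH" | "CRITICAL";

interface Finding {
  id: string;                      // "F1"
  severity: Severity;              // "CRITICAL"
  title: string;
  description: string;
  evidence: string;
  business_impact: string;
  regulatory_citations: string[];  // e.g. "HIPAA Privacy Rule §164.502(e)"
}

interface AuditReport {
  overall_risk_score: number;      // 0–100; 82 for Acme Health
  severity: Severity;
  headline: string;                // the board-ready one-liner
  executive_summary: string;
  findings: Finding[];             // eight in the Acme Health run
  tool_risks: unknown[];           // per-tool risk ratings
  regulatory_exposure: unknown;    // matrix across the in-scope frameworks
  action_plan: unknown[];          // 90-day plan, prioritized by severity and effort
}
```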
§ 03 · Why This Is Hard Anywhere Else

What this would cost
anywhere else.

The audit you just read is not a generic policy template. It's specific (Acme Health, 150 people), regulation-anchored (HIPAA clauses cited by number), and log-verified (89 and 12 are real hit counts, not estimates). Every other path to this same output looks like this:

Path | Time | Cost | What You Get
Big 4 consulting engagement | 6–12 weeks | $50K–250K | Generic frameworks, PowerPoint deliverable
Healthcare privacy counsel | 2–4 weeks | $15K–40K | Legally defensible, no log analysis
Palo Alto AI Access Security | Quarters | $80K+/year | Enterprise-only, needs dedicated sec team
DIY: ChatGPT + template | 1 weekend | $0 | No citations, no logs, fails SOC 2 audit
SanctumShield | 22 seconds | $99–899/mo | Regulation-anchored, log-verified, board-ready

The economics are decisive. A single Big 4 engagement buys roughly ten years of SanctumShield Scale: assuming Scale is the $899/month tier, ten years comes to about $108K, inside the $50K–250K range above. A single billable hour with healthcare privacy counsel covers roughly three months of SanctumShield Business at $299/month. The CISO does not need permission to spend $299/month.

§ 04 · How We Validate This

The test harness.

The audit above is not a marketing mockup. It's live output from the three endpoints we exercise as an end-to-end test against every deploy. If any of them regresses, the deploy is blocked. Here's what the test harness actually does:

T1

Free Shadow AI Risk Calculator

FREE · INSTANT

A ten-question self-assessment that scores any organization's exposure to unmanaged AI tool usage in under sixty seconds. Results are deterministic, immediate, and free — no account required, no charge, no waiting. The output names the three most pressing gaps in your current AI governance posture and quantifies your risk on a 0–100 scale across four tiers; a sketch of this style of scoring follows the checklist below.

Free, instant risk score
Three high-severity findings tailored to your answers
Zero cost · no account required
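
To make "deterministic" concrete, here is an illustrative scoring sketch. The 0–100 scale and four tiers come from the description above; the question weights and tier cutoffs are invented for the example.

```typescript
// Illustrative four-tier, deterministic scoring. Cutoffs are assumptions.
const TIERS = [
  { min: 75, label: "CRITICAL" },
  { min: 50, label: "HIGH" },
  { min: 25, label: "MEDIUM" },
  { min: 0,  label: "LOW" },
] as const;

// Ten answers, each contributing 0–10 points, cap the score at 100.
function scoreAssessment(answers: number[]): { score: number; tier: string } {
  const raw = answers.reduce((sum, a) => sum + Math.min(10, Math.max(0, a)), 0);
  const score = Math.min(100, raw);
  const tier = TIERS.find((t) => score >= t.min)!.label; // min 0 guarantees a match
  return { score, tier };
}

// Example: mostly high-risk answers land in the CRITICAL tier.
console.log(scoreAssessment([9, 8, 10, 7, 9, 8, 10, 9, 7, 8])); // { score: 85, tier: "CRITICAL" }
```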
T2

Network Log Analysis & AI Endpoint Detection

EVIDENCE-BASED

Paste a list of outbound hostnames from any firewall, proxy, or DNS log. SanctumShield matches them against a continuously maintained registry of 64 known AI endpoints with suffix-aware matching (so eu.api.openai.com still hits api.openai.com). The registry covers consumer LLMs, enterprise APIs, multi-model gateways, code AI, image / video / voice generation, and high-risk jurisdictions. A sketch of the matching logic follows the checklist below.

Real outbound AI traffic identified, not self-reported
Per-endpoint hit counts with vendor and category
Pre-rated risk tier for every match
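
Here is a minimal sketch of that suffix-aware matching, with a three-entry registry standing in for the real 64-endpoint one; the entries and risk tiers shown are illustrative.

```typescript
// Suffix-aware hostname matching against an AI endpoint registry.
// Registry entries and risk tiers below are illustrative samples.
type RiskTier = "LOW" | "MEDIUM" | "HIGH" | "CRITICAL";

interface RegistryEntry {
  endpoint: string;   // registered hostname, e.g. "api.openai.com"
  vendor: string;
  category: string;
  risk: RiskTier;
}

const REGISTRY: RegistryEntry[] = [
  { endpoint: "chat.openai.com", vendor: "OpenAI", category: "Consumer LLM", risk: "HIGH" },
  { endpoint: "api.openai.com", vendor: "OpenAI", category: "Enterprise API", risk: "MEDIUM" },
  { endpoint: "character.ai", vendor: "Character.AI", category: "Roleplay chatbot", risk: "HIGH" },
];

// A hostname matches if it equals a registered endpoint or is a subdomain
// of one — so "eu.api.openai.com" still hits "api.openai.com".
function matchEndpoint(hostname: string): RegistryEntry | undefined {
  const host = hostname.toLowerCase().replace(/\.$/, ""); // drop any trailing dot
  return REGISTRY.find(
    (e) => host === e.endpoint || host.endsWith("." + e.endpoint),
  );
}

// Tally per-endpoint hit counts from a list of outbound hostnames.
function tallyHits(hostnames: string[]): Map<string, number> {
  const hits = new Map<string, number>();
  for (const h of hostnames) {
    const entry = matchEndpoint(h);
    if (entry) hits.set(entry.endpoint, (hits.get(entry.endpoint) ?? 0) + 1);
  }
  return hits;
}
```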
T3

Executive Risk Report & AI Acceptable Use Policy

CISO-READY

A full audit pipeline that combines your company profile, your tools inventory, your network log analysis, and your declared compliance frameworks into two board-ready deliverables: an Executive Risk Report and a customized AI Acceptable Use Policy. Outputs cite specific regulatory clauses (HIPAA §164.502(e), SOC 2 CC6.1, EU AI Act Article 5, NIST AI RMF, GDPR Article 22) and include a prioritized 90-day action plan with named owners and effort levels.

CISO-grade audit report with five regulation-anchored findings
14-section AI Acceptable Use Policy customized to industry and frameworks
Downloadable as Word (.docx), Markdown, plain text, and HTML
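
One plausible way to produce those four formats from a single Markdown source is a pandoc pass, sketched below; this is an assumption for illustration, not a confirmed description of SanctumShield's export pipeline.

```typescript
// Export the same policy in .docx, HTML, and plain text from one Markdown
// source via pandoc. An illustrative pipeline, not the confirmed one.
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

function exportPolicy(markdown: string, basename: string): void {
  const src = `${basename}.md`;
  writeFileSync(src, markdown); // Markdown is the canonical source format
  const targets: Array<[ext: string, format: string]> = [
    ["docx", "docx"],
    ["html", "html"],
    ["txt", "plain"],
  ];
  for (const [ext, format] of targets) {
    // pandoc converts the Markdown source into each requested format
    execFileSync("pandoc", [src, "-t", format, "-o", `${basename}.${ext}`]);
  }
}
```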

Every SanctumShield deployment runs this end-to-end test harness before going live. If any of the three regresses, the deploy is blocked and the team is alerted before a customer ever sees it.

§ 05 · Why You Want This

Three reasons a CISO
writes the check today.

01

The board is already asking.

AI governance moved from an IT question to a board question in 2026. Every director has read about an AI data breach. They're asking the CISO for a posture update. SanctumShield gives the CISO an answer the same week, with specific numbers — 89 connections, 12 connections, $1.5M annual OCR max — that read as credible rather than hand-wavy.

02

Generic templates fail audits.

SOC 2 and HIPAA auditors no longer accept a generic AI policy copied from a template. They want a policy that names the tools actually in use, maps them to specific regulatory clauses, and shows evidence of enforcement. SanctumShield produces all three — the policy, the mapping, and the evidence trail — from the audit-generation pipeline itself.

03

The landscape changes monthly.

Every month a new AI tool shows up in your employees' browsers. Every quarter a regulation updates. Every year your SOC 2 comes around again. A one-time audit is stale in 30 days. SanctumShield re-scans on a monthly cadence and flags drift automatically — which is why the subscription is defensible, not arbitrary.

Run it on your
actual organization.

Start with the free Shadow AI Risk Calculator — ten questions, no account required, score in sixty seconds. If the number is ugly (and for most organizations in 2026, it is), you'll know within a minute what a full SanctumShield audit would look like for your company.
