Sample Outputs · Fictional Company

See the actual artifacts before you pay $99 to generate your own.

Three SanctumShield artifacts arranged in a triad: AI Acceptable Use Policy (14 sections + 3 appendices), Executive Risk Report (4-layer Shadow AI audit with severity-ranked findings and 90-day action plan), and Board Memo (1-page CEO-voice summary). Each carries a unique five-year verification URL.
Three audit-grade artifacts. Five-year verification URL on every one.

Three rendered SanctumShield deliverables for a fictional 240-employee healthcare SaaS named Acme Health — the same Acme Health referenced on the Explainer and How It Works pages. Every value is fictional. No real PHI, no real findings against any real organization.

§ 01 · Executive Risk Report

5 findings · TOXIC_COMBINATION callout · impact-first severity rationale.

Sample shows: 5 regulation-anchored findings ordered by severity (CRITICAL → MEDIUM), one TOXIC_COMBINATION cross-cutting callout, per-finding confidence rating, severity rationale (Sprint 4 P30 impact-first model), policy coverage count (Sprint 4 P29), and a 90-day action plan.

SAMPLE · fictional company · no real PHI · for evaluation only

Executive Shadow AI Risk Audit

Acme Health, Inc.

Report ID: SS-ACME-20260426-SAMPLE
Report Date: 2026-04-26
Classification: Sample · Fictional Company · For Evaluation Only
Overall Risk
78/100
HIGH
“Acme Health has high AI governance exposure driven by unmanaged Copilot on a PHI tenant.”

Executive Summary

Acme Health operates in a regulated healthcare environment with substantial AI exposure across all three layers of the SanctumShield assessment model. The most material risk is unmanaged Microsoft 365 Copilot usage on a tenant containing Protected Health Information (PHI), with no AUP coverage and no business associate agreement specific to the Copilot service. Without immediate remediation, Acme Health is one configuration change away from a HIPAA notification trigger.

Four additional findings — unmanaged Cursor agent usage on engineering laptops, no formal AI Acceptable Use Policy, no published sub-processor disclosures, and an unrestricted BYOD posture invisible to MDM — compound the posture into a TOXIC_COMBINATION. Recommended remediation begins within seven days; full closure is achievable inside 90 days under the action plan in this report.

⚠️ TOXIC_COMBINATION — Cross-Cutting Severity

The following findings, while individually addressable, combine into postures that are exploitable in the aggregate. Address as a bundle, not in isolation.

CRITICAL

Untracked Copilot exposure on PHI tenant

Findings F1 + F3 + F4 combine into an aggregate posture where PHI flows through an AI surface with no policy, no sub-processor disclosure, and no executive review cycle. Each finding is individually addressable; together they constitute an exploitable governance exposure.

Composing findings: F1, F3, F4
Why toxic: This composite would be a finding on its own under HIPAA, GDPR Article 28, and SOC 2 CC5.3 simultaneously — and would be excluded by most cyber insurance policies as an uncovered AI-attributable loss.

Key Findings

F1 · CRITICAL · Confidence: HIGH

Microsoft 365 Copilot uncovered on PHI tenant

Why CRITICAL: PHI is one Copilot configuration change away from triggering HIPAA §164.404 notification.

Copilot is enabled on the Acme tenant and reads SharePoint, OneDrive, and Exchange data including PHI. The current AUP does not address Copilot, and no BAA addendum has been executed with Microsoft for the Copilot service specifically.

Evidence: Embedded AI inventory (Layer 2) reports M365 Copilot enabled. AUP version 2.1 dated 2025-06 contains no Copilot reference. Microsoft Customer Agreement on file does not enumerate Copilot among in-scope services.
Business Impact: A single Copilot configuration change (or an employee paste of a patient record into a Copilot chat) could trigger a HIPAA Breach Notification Rule disclosure to HHS and affected individuals within 60 days.
Regulations: HIPAA §164.502(e) · HIPAA §164.404
Policy Coverage: 2 policy documents · 2 executive signatures required · touches HIPAA, SOC 2
F2 · HIGH · Confidence: HIGH

Unmanaged Cursor agent usage on engineering laptops

Agent-as-Developer · Authored Code Gap
Why HIGH: The data class at risk (source code containing customer identifiers) is one classification decision away from triggering breach notification.

Engineering team uses Cursor with autonomous-agent features enabled. No AUP coverage, no review cadence, no documented allowlist of MCP servers Cursor agents are permitted to invoke.

Evidence: Network log analysis matched cursor.sh in 1,247 outbound connections over the trailing 30 days from 14 distinct engineering endpoints.
Business Impact: Source code containing customer identifiers, API keys, and proprietary algorithms may be transmitted to Cursor sub-processors without DPA coverage or sub-processor disclosure.
Regulations: HIPAA §164.502(e) · GDPR Article 28
Policy Coverage: 1 policy document · 1 executive signature required · touches HIPAA, GDPR
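The network-log evidence behind F2 can be sketched in a few lines. This is a minimal illustration, not SanctumShield's actual pipeline: the CSV log format and the domain-to-tool map are assumptions made for the example.

```python
# Hypothetical sketch: tallying outbound connections to known AI-tool
# domains from a proxy/DNS log export. The log columns and the domain
# map below are illustrative assumptions, not a real detection pipeline.
import csv
import io
from collections import Counter

AI_TOOL_DOMAINS = {
    "cursor.sh": "Cursor",
    "api.openai.com": "ChatGPT",
    "gemini.google.com": "Google Gemini",
}

def count_ai_connections(log_csv: str) -> Counter:
    """Count outbound connections per AI tool from a CSV log with
    columns: timestamp, endpoint_id, dest_domain."""
    hits = Counter()
    for row in csv.DictReader(io.StringIO(log_csv)):
        tool = AI_TOOL_DOMAINS.get(row["dest_domain"])
        if tool:
            hits[tool] += 1
    return hits

sample_log = """timestamp,endpoint_id,dest_domain
2026-04-01T09:14Z,eng-07,cursor.sh
2026-04-01T09:15Z,eng-07,cursor.sh
2026-04-01T10:02Z,sales-03,api.openai.com
"""
print(count_ai_connections(sample_log))  # Counter({'Cursor': 2, 'ChatGPT': 1})
```

A real audit would also attribute distinct endpoints (the "14 distinct engineering endpoints" in the evidence), which is a one-line extension using a set per tool.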
F3 · HIGH · Confidence: HIGH

No formal AI Acceptable Use Policy in place

Pillar 4 · Policies
Why HIGH: The absence of an AUP triggers a SOC 2 audit exception with insurance-renewal exclusion implications.

Acme has no published AI Acceptable Use Policy covering AI tools, agents, or embedded-AI features. The policy gap blocks SOC 2 attestation for AI-specific controls and creates a documented exception under EU AI Act Article 14 for any high-risk processing.

Evidence: No AUP document found in the policy register. SOC 2 Type II readiness assessment dated 2026-Q1 flagged this as a CC5.3 exception.
Business Impact: Annual SOC 2 audit will note an exception against CC5.3. Cyber insurance underwriting questionnaire renewal due 2026-Q3 explicitly requires an AI AUP.
Regulations: SOC 2 CC5.3 · EU AI Act Article 14
Policy Coverage: 1 policy document · 3 executive signatures required · touches SOC 2, HIPAA, EU AI Act
F4 · MEDIUM · Confidence: MEDIUM

No sub-processor disclosure for AI vendors

Why MEDIUM: This is a control gap with no current evidence of exploitation, but it has surfaced in two recent customer questionnaires.

Acme has not maintained a sub-processor list documenting which AI vendors process customer data on the platform's behalf. GDPR Article 28 requires data controllers (Acme's customers) to be notified of new sub-processors with the right to object.

Evidence: Trust portal at acmehealth.example/trust does not list AI sub-processors. Customer DPAs reference an outdated sub-processor list dated 2024-09.
Business Impact: Customer DPA Article 28 obligations are not currently met. Two enterprise customers have flagged this in vendor-review questionnaires received in the trailing 60 days.
Regulations: GDPR Article 28
Policy Coverage: 1 policy document · 1 executive signature required · touches GDPR
F5 · MEDIUM · Confidence: MEDIUM

BYOD AI posture invisible to MDM, EDR, and DLP simultaneously

Why MEDIUM: The gap exists, but the data provided contains no current evidence of exfiltration.

Personal devices are allowed unrestricted access to corporate Google Workspace and M365 via OAuth. The AUP does not prohibit personal-device AI tools (Cursor, Ollama, Claude Desktop, n8n) on corporate data. This is a Layer 3 blind spot the existing security stack cannot see.

Evidence: BYOD posture intake reports unrestricted personal-device access. Conditional access does not require a managed device for OAuth. No prohibition in AUP § 5.
Business Impact: An employee on a personal laptop can authenticate to corporate Workspace, paste PHI into a personal ChatGPT subscription, and the action will be invisible to MDM, EDR, network monitoring, and DLP.
Regulations: HIPAA §164.312
Policy Coverage: 1 policy document · 2 executive signatures required · touches HIPAA

Tool Risks

Tool | Risk | Recommendation | Why
Microsoft 365 Copilot (Layer 2 · embedded) | CRITICAL | CONDITIONAL | Reads PHI tenant data with no AUP coverage and no Copilot-specific BAA addendum.
Cursor (Layer 1 · direct) | HIGH | CONDITIONAL | Autonomous-agent features enabled on engineering source code containing customer identifiers.
Google Workspace Gemini (Layer 2 · embedded) | HIGH | CONDITIONAL | Same scope as Copilot; also uncovered by the AUP.
ChatGPT (Layer 1 · direct) | MEDIUM | CONDITIONAL | 443 connections in the trailing 30 days; consumer-tier accounts cannot be confirmed.
Notion AI (Layer 2 · embedded) | MEDIUM | APPROVE | Knowledge-base search agent on a workspace containing customer data.

Regulatory Exposure

Acme Health's declared scope (HIPAA + SOC 2 + GDPR) is materially exposed by the combination of unmanaged Copilot on a PHI tenant and the absence of an AI AUP. Under HIPAA §164.502(e) the lack of a Copilot-specific BAA exposes Acme to penalty risk if PHI flows through the service. Under SOC 2 CC5.3 the AUP gap is a documented exception that the 2026-Q3 Type II audit will flag. Under GDPR Article 28 the missing sub-processor disclosure triggers the right to object for any EU-resident customer. Address within 30 days to avoid a HIPAA notification cascade and a SOC 2 audit exception.

90-Day Action Plan

  1. Week 1 — Disable Copilot tenant-wide pending policy adoption and Microsoft Copilot-specific BAA execution.
    Owner: CISO · Effort: LOW · Confidence: HIGH
  2. Week 2 — Adopt SanctumShield-generated AI Acceptable Use Policy (14 sections, regulation-anchored) and route for CISO + GC + CEO signature.
    Owner: Legal · Effort: MEDIUM · Confidence: HIGH
  3. Week 4 — Publish AI sub-processor disclosure on the trust page and notify enterprise customers with a 30-day right-to-object notice.
    Owner: Legal · Effort: MEDIUM · Confidence: MEDIUM
  4. Week 6 — Implement managed-device conditional access for corporate OAuth across Google Workspace and M365.
    Owner: IT · Effort: HIGH · Confidence: MEDIUM
  5. Week 8 — Roll out company-wide AI safe-use training with mandatory acknowledgment tied to the new AUP.
    Owner: HR · Effort: MEDIUM · Confidence: HIGH
§ 02 · AI Acceptable Use Policy (excerpt — 4 of 14 sections)

Regulation-anchored. Customized. Auditor-readable.

The full policy is 14 sections plus 3 appendices, ~3,500-4,500 words, customized to the customer’s industry, jurisdictions, and frameworks. Sample shows § 1 Purpose & Scope, § 2 Definitions, § 5 Healthcare-Specific Restrictions (HIPAA-anchored, customer-specific), and § 14 Deployed Agent Policy with the new § 14.9 Agent-Authored Code Governance sub-clause. An outside-counsel-generated AUP of comparable depth typically costs $5,000 to $25,000+; SanctumShield delivers it as part of the $99/month subscription.

SAMPLE · fictional company · 4 of 14 sections · for evaluation only

AI Acceptable Use Policy

Sample excerpt · 4 of 14 sections

Company: Acme Health, Inc.
Department: Company-wide
Version: 1.0 · Sample
Effective Date: 2026-05-01
Next Review: 2027-05-01
Policy Owner: Chief Information Security Officer
Classification: Sample · Fictional Company · For Evaluation Only

Executive Summary

This Policy governs the use of artificial intelligence tools, agents, and embedded-AI features at Acme Health, Inc. It is anchored to HIPAA §164.502(e), §164.312, SOC 2 Common Criteria CC5.3 and CC7.2, GDPR Articles 28 and 35, the EU AI Act (Articles 13, 14, 50, 52), and the NIST AI Risk Management Framework. The Policy applies to all employees, contractors, and agents acting on Acme Health's behalf. The full Policy contains 14 sections plus three appendices; this excerpt shows § 1, § 2, § 5, and § 14.

Section 1: Purpose and Scope

1.1 Purpose

This Policy establishes the rules under which artificial intelligence (AI) tools may be used by Acme Health, Inc. ("Acme Health" or "the Company") personnel in the course of their work. AI tools are powerful and increasingly pervasive; their use creates legal, regulatory, contractual, and reputational risks that are categorically different from those posed by traditional information technology.

This Policy exists to manage those risks while preserving the productivity and innovation benefits AI tools provide.

1.2 Scope

This Policy applies to:

  • All employees, contractors, consultants, and temporary staff acting on Acme Health's behalf
  • All AI tools, including but not limited to: generative AI (text, code, image, audio, video), AI assistants, AI agents, autonomous-agent frameworks, AI features embedded in business applications, and AI-enabled browser extensions
  • All Acme Health information systems and any personal devices used to access Company data
  • All Acme Health data, customer data, employee data, vendor data, and third-party data the Company processes

1.3 Effective Date

This Policy is effective 2026-05-01 and supersedes any prior AI use guidance issued by Acme Health.

Section 2: Definitions

For the purposes of this Policy:

AI Tool — Any software product or service that uses machine learning, natural language processing, or generative artificial intelligence to produce text, code, images, audio, video, recommendations, classifications, or other outputs.

AI Agent — An AI tool that operates with a degree of autonomy, including the ability to use other tools, call APIs, retrieve data, take actions, or coordinate with other agents.

Embedded AI — AI features built into a business application (Microsoft 365 Copilot, Google Workspace Gemini, Salesforce Agentforce, Notion AI, etc.) where the AI inference is proxied through the SaaS vendor's own infrastructure.

Sanctioned AI Tool — An AI tool that has been reviewed and approved for specific use cases per the AI Tools Registry maintained by the CISO's office (Appendix A).

Confidential Information — Acme Health business information that is not public, including personnel records, customer lists, financial data, source code, product roadmaps, and contractual terms.

Restricted Information — Acme Health information subject to specific legal, contractual, or regulatory restrictions, including Protected Health Information (PHI) under HIPAA.

Section 5: Healthcare-Specific Restrictions (HIPAA Compliance)

Because Acme Health processes Protected Health Information (PHI) as defined under HIPAA, the following additional restrictions apply to any AI tool used in connection with patient data, clinical operations, or revenue cycle management:

5.1 Business Associate Agreement Required

No AI tool may process PHI without a Business Associate Agreement (BAA) executed with the AI vendor that explicitly includes the AI service as in-scope. This includes embedded-AI features such as Microsoft 365 Copilot, Google Workspace Gemini, and Salesforce Agentforce — the underlying enterprise BAA does not automatically extend to AI features.

5.2 Prohibited Embedded-AI Configurations

Until a Copilot-specific BAA addendum is executed and the data flow is documented, the following Microsoft 365 Copilot configurations are PROHIBITED on the Acme Health tenant:

  • Copilot indexing of any SharePoint site or OneDrive folder containing PHI
  • Copilot summarization of any Outlook conversation containing PHI
  • Copilot transcription or summarization of Teams meetings discussing PHI

5.3 Patient-Facing AI Disclosure

Per HIPAA §164.520 (Notice of Privacy Practices) and applicable state law, any AI tool that interacts directly with a patient (chatbot, intake assistant, AI scheduler) must be disclosed in the Notice of Privacy Practices and any patient-facing consent forms. The disclosure must specify what AI is used for, what data it accesses, and what controls govern that access.

5.4 Clinical Decision Support

AI tools used to support clinical decisions are subject to additional governance under § 164.312(c)(1) (integrity controls) and the FDA's Software as a Medical Device (SaMD) guidance where applicable. Clinical decision support tools require executive sign-off from the Chief Medical Officer in addition to the CISO.

5.5 Sub-Processor Disclosure

Per GDPR Article 28 (where applicable to EU-resident patient data) and customer DPAs, the AI sub-processor list maintained at acmehealth.example/trust must be updated within 30 days of any new AI vendor onboarding. Customers retain the right to object to a new sub-processor and have 30 days from notification to do so.
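The 30-day disclosure window in § 5.5 reduces to a simple date check. The sketch below is one way such a freshness check might be automated; the function name and sample dates are illustrative, not part of the Policy.

```python
# Hedged sketch of the § 5.5 freshness rule: a vendor onboarded more
# than 30 days ago must already appear on the sub-processor list.
# Function name and sample dates are assumptions for illustration.
from datetime import date, timedelta

def disclosure_overdue(onboarded: date, last_list_update: date, today: date) -> bool:
    """True when a vendor onboarded more than 30 days ago is still not
    reflected in the published sub-processor list."""
    reflected = last_list_update >= onboarded
    return (not reflected) and (today - onboarded > timedelta(days=30))

# Vendor onboarded 2026-03-01; list last touched 2026-02-01: overdue.
assert disclosure_overdue(date(2026, 3, 1), date(2026, 2, 1), date(2026, 4, 26)) is True
# List updated after onboarding: compliant.
assert disclosure_overdue(date(2026, 3, 1), date(2026, 3, 20), date(2026, 4, 26)) is False
```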

Section 14: Deployed Agent Policy

14.1 Scope

This Section applies whenever Acme Health deploys, builds, hosts, or integrates an AI agent — including no-code agents created in platforms such as Microsoft Copilot Studio, Salesforce Agentforce, Gemini Enterprise custom agents, or ServiceNow Agentic AI — and whenever Acme Health consumes Model Context Protocol (MCP) servers or participates in agent-to-agent (A2A) discovery chains.

14.2 Agent Registration

Every agent Acme Health deploys must be registered in the agent inventory maintained by the CISO's office before it is enabled in production. Registration captures: agent name, owner, hosting platform, declared tools and skills, identity model, data classes the agent is permitted to access, and the application boundary the agent operates within.
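The registration record described above maps naturally onto a structured schema. A minimal sketch, using the fields § 14.2 names (the Python field names and the example values are assumptions, not a prescribed format):

```python
# Illustrative schema for a § 14.2 agent-inventory record. Field names
# mirror the Policy's list; the example entry is fictional.
from dataclasses import dataclass

@dataclass
class AgentRegistration:
    name: str
    owner: str
    hosting_platform: str
    declared_tools: list
    identity_model: str
    permitted_data_classes: list
    application_boundary: str

    def validate(self) -> list:
        """Return the names of any empty required fields."""
        return [f for f, v in vars(self).items() if not v]

entry = AgentRegistration(
    name="claims-intake-assistant",
    owner="jdoe@acmehealth.example",
    hosting_platform="Copilot Studio",
    declared_tools=["sharepoint.search"],
    identity_model="dedicated service principal",
    permitted_data_classes=["Confidential (non-PHI)"],
    application_boundary="claims-intake SharePoint site",
)
assert entry.validate() == []  # a complete record passes validation
```

A pre-production gate could refuse to enable any agent whose record fails `validate()`.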

14.3 MCP Server Allowlist

Acme Health maintains a vetted allowlist of permitted MCP servers. Connecting an agent to any MCP server not on the allowlist is prohibited. The allowlist is reviewed at the same cadence as the AI Tools Registry. Allowlisted MCP servers are subject to re-validation at minimum quarterly to detect Discovery Poisoning.
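The allowlist gate in § 14.3 can be sketched in a few lines. The MCP server names below are invented for illustration; the real allowlist lives with the CISO's office.

```python
# Minimal sketch of the § 14.3 gate: an agent's requested MCP servers
# are checked against the vetted allowlist before the agent is enabled.
# Server names are fictional placeholders.
ALLOWED_MCP_SERVERS = {
    "mcp.internal.acmehealth.example/docs",
    "mcp.internal.acmehealth.example/tickets",
}

def check_mcp_servers(requested: list) -> list:
    """Return any requested MCP servers that are not on the allowlist."""
    return [s for s in requested if s not in ALLOWED_MCP_SERVERS]

violations = check_mcp_servers([
    "mcp.internal.acmehealth.example/docs",
    "mcp.unvetted.example/search",
])
print(violations)  # ['mcp.unvetted.example/search']
```

The quarterly re-validation requirement means the set itself must carry review metadata; that is omitted here for brevity.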

14.7 Kill-Switch

Every deployed agent must be subject to a documented kill-switch — a single action by the CISO's office (or designee) that immediately disables the agent across all environments. The kill-switch procedure is tested quarterly.

14.9 Agent-Authored Code Governance

Where Acme Health engineering teams use coding agents that author production code (Cursor, Claude Code, Google Antigravity, Gemini CLI, GitHub Copilot Workspace, or equivalent), the following controls apply: (a) policy provenance — the coding agent's policy and security context must be documented; (b) named human accountability — every agent commit, PR, or deployment must carry a named human owner; (c) change-management evidence — coding agents must operate within the same code review and CI/CD gating as human-authored code; (d) runtime IAM separation — the runtime IAM the coding agent holds at authoring time must be distinct from the runtime IAM the deployed code itself runs under; (e) validator-pair attestation — where a coder-validator agent pair is used, the validator's attestations are recorded as evidence but do not substitute for the human accountability requirement.
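Control (b) above, named human accountability, is the easiest to enforce mechanically. A hedged sketch of a CI gate: the agent-author list and the "Human-Owner" commit-trailer convention are assumptions for illustration, not part of the Policy text.

```python
# Sketch of a § 14.9(b) CI gate: reject agent-authored commits that
# lack a named human owner. The author identities and the trailer key
# "Human-Owner" are assumed conventions, not real bot accounts.
AGENT_AUTHORS = {"cursor-agent", "claude-code-bot"}

def check_commit(author: str, trailers: dict) -> bool:
    """Pass if the commit is human-authored, or agent-authored with a
    non-empty human owner recorded in its commit trailers."""
    if author not in AGENT_AUTHORS:
        return True
    return bool(trailers.get("Human-Owner", "").strip())

assert check_commit("jdoe", {}) is True
assert check_commit("cursor-agent", {"Human-Owner": "jdoe"}) is True
assert check_commit("cursor-agent", {}) is False
```

Controls (c) through (e) require evidence from the review and deployment systems themselves and are not reducible to a single check.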

Appendices

Appendix A: Approved AI Tools Registry (sample excerpt)

The full registry is maintained by the CISO's office. The excerpt below is illustrative of the format.

Tool | Category | Approved Use Cases | Data Class Limit | Approved | Next Review
Microsoft 365 Copilot | Embedded productivity | Document drafting (non-PHI), meeting summarization (non-PHI) | Confidential (non-PHI) | 2026-05-15 | 2026-11-15
Notion AI | Embedded knowledge | Knowledge-base search, document summarization | Confidential | 2026-04-01 | 2026-10-01
Cursor | Direct developer tool | Code authoring with mandatory PR review; agentic features DENIED until § 14.9 controls in place | Confidential (no customer identifiers) | 2026-04-15 | 2026-07-15
ChatGPT Enterprise | Direct conversational | Research, drafting, brainstorming (no Restricted data) | Confidential (no PHI) | 2026-04-01 | 2026-10-01
§ 03 · Board Memo (1-page CEO-voice)

Same data, different artifact, different audience.

Derived from the same audit as § 01 above. CEO voice (first-person plural), plain English, no jargon, no CEL or SPIFFE acronyms. The document a CEO actually hands up to the board audit committee — the Executive Risk Report stays in the CISO’s briefcase as the supporting detail.

SAMPLE · fictional company · for evaluation only

Board Memo — AI Governance Risk

Acme Health, Inc.

Date: 2026-04-26
Source Report ID: SS-ACME-20260426-SAMPLE
Classification: Sample · Fictional Company · For Evaluation Only

To the Board of Directors and Audit Committee of Acme Health

We have completed a comprehensive review of our artificial intelligence risk posture and identified a high level of exposure that requires immediate attention. Our most significant concern is the unmanaged use of Microsoft 365 Copilot within our digital environment that contains protected health information. Currently, this tool is active without the necessary policy safeguards or specific legal agreements required for healthcare data. This creates a situation where a single configuration change could trigger a mandatory data breach notification under federal law. The lack of a formal acceptable use policy for AI tools also threatens our upcoming security audit certifications. We are taking decisive action to pause these services until we have established the proper governance, legal disclosures, and employee training to ensure we utilize these technologies safely and remain in full compliance with healthcare privacy regulations.

What we found

  • We discovered that Microsoft 365 Copilot is active on our primary data systems and accessing protected health information without the required HIPAA business associate agreements or usage policies in place.
  • Our engineering team is using an AI coding assistant called Cursor that may be transmitting sensitive source code and customer identifiers to external providers without proper data protection oversight.
  • We currently lack a formal AI Acceptable Use Policy and a public list of AI sub-processors, which leaves us unable to meet our SOC 2 audit requirements and our international data privacy obligations.

What we are doing

  • Within the first week, our Chief Information Security Officer is disabling Copilot across the entire organization until we have adopted formal safety policies and executed the necessary business associate agreements.
  • By the second week, our Legal team will implement a new AI Acceptable Use Policy to provide clear rules for how our employees interact with these tools, with sign-off from the CISO, General Counsel, and CEO.
  • By the fourth week, our Legal team will publish a formal disclosure of all AI vendors we use and notify enterprise customers, ensuring we meet our transparency requirements for regulators and contractual obligations to the business.

I welcome the Board's questions regarding these remediation steps at our next session.

Signature
[CEO Name]
Chief Executive Officer

Generate your own — for your company, department, or line of business.

$99/month. Same artifacts, customized to your industry, jurisdictions, frameworks, and AI tool inventory. Run the free Shadow AI Risk Calculator first to see the assessment style; the full subscription produces all three deliverables (plus the verification URL) in minutes from a guided assessment.

Sample Outputs — Executive Risk Report, AUP excerpt, Board Memo (Acme Health)