See the actual artifacts.
Before you pay $99 to generate your own.

Three rendered SanctumShield deliverables for a fictional 240-employee healthcare SaaS named Acme Health — the same Acme Health referenced on the Explainer and How It Works pages. Every value is fictional. No real PHI, no real findings against any real organization.
5 findings · TOXIC_COMBINATION callout · impact-first severity rationale.
Sample shows: 5 regulation-anchored findings ordered by severity (CRITICAL → MEDIUM), one TOXIC_COMBINATION cross-cutting callout, per-finding confidence rating, severity rationale (Sprint 4 P30 impact-first model), policy coverage count (Sprint 4 P29), and a 90-day action plan.
Executive Shadow AI Risk Audit
Acme Health, Inc.
Executive Summary
Acme Health operates in a regulated healthcare environment with substantial AI exposure across all three layers of the SanctumShield assessment model. The most material risk is unmanaged Microsoft 365 Copilot usage on a tenant containing Protected Health Information (PHI), with no AUP coverage and no business associate agreement specific to the Copilot service. Without immediate remediation, Acme Health is one configuration change away from a HIPAA notification trigger.
Four additional findings — unmanaged Cursor agent usage on engineering laptops, no formal AI Acceptable Use Policy, no published sub-processor disclosures, and an unrestricted BYOD posture invisible to MDM — compound the posture into a TOXIC_COMBINATION. Recommended remediation begins within seven days; full closure is achievable inside 90 days under the action plan in this report.
⚠️ TOXIC_COMBINATION — Cross-Cutting Severity
The following findings, while individually manageable, combine into a posture that is exploitable in the aggregate. Address them as a bundle, not in isolation.
Untracked Copilot exposure on PHI tenant
Findings F1 + F3 + F4 combine into an aggregate posture where PHI flows through an AI surface with no policy, no sub-processor disclosure, and no executive review cycle. Each finding is individually addressable; together they constitute an exploitable governance exposure.
Key Findings
Microsoft 365 Copilot uncovered on PHI tenant
Copilot is enabled on the Acme tenant and reads SharePoint, OneDrive, and Exchange data including PHI. The current AUP does not address Copilot, and no BAA addendum has been executed with Microsoft for the Copilot service specifically.
Unmanaged Cursor agent usage on engineering laptops
Engineering team uses Cursor with autonomous-agent features enabled. No AUP coverage, no review cadence, no documented allowlist of MCP servers Cursor agents are permitted to invoke.
No formal AI Acceptable Use Policy in place
Acme has no published AI Acceptable Use Policy covering AI tools, agents, or embedded-AI features. The policy gap blocks SOC 2 attestation for AI-specific controls and creates a documented exception under EU AI Act Article 14 for any high-risk processing.
No sub-processor disclosure for AI vendors
Acme does not maintain a sub-processor list documenting which AI vendors process customer data on the platform's behalf. GDPR Article 28 requires that data controllers (Acme's customers) be notified of new sub-processors and retain the right to object.
BYOD AI posture invisible to MDM, EDR, and DLP simultaneously
Personal devices are allowed unrestricted access to corporate Google Workspace and M365 via OAuth. The AUP does not prohibit personal-device AI tools (Cursor, Ollama, Claude Desktop, n8n) on corporate data. This is a Layer 3 blind spot the existing security stack cannot see.
Tool Risks
| Tool | Risk | Recommendation | Why |
|---|---|---|---|
| Microsoft 365 Copilot (Layer 2 · embedded) | CRITICAL | CONDITIONAL | Reads PHI tenant data with no AUP coverage and no Copilot-specific BAA addendum. |
| Cursor (Layer 1 · direct) | HIGH | CONDITIONAL | Autonomous-agent features enabled on engineering source code containing customer identifiers. |
| Google Workspace Gemini (Layer 2 · embedded) | HIGH | CONDITIONAL | Same data scope as Copilot; likewise uncovered by the AUP. |
| ChatGPT (Layer 1 · direct) | MEDIUM | CONDITIONAL | 443 connections in the trailing 30 days; whether accounts are consumer-tier or enterprise cannot be confirmed. |
| Notion AI (Layer 2 · embedded) | MEDIUM | APPROVE | Knowledge-base search agent on workspace containing customer data. |
Regulatory Exposure
Acme Health's declared scope (HIPAA + SOC 2 + GDPR) is materially exposed by the combination of unmanaged Copilot on a PHI tenant and the absence of an AI AUP. Under HIPAA §164.502(e) the lack of a Copilot-specific BAA exposes Acme to penalty risk if PHI flows through the service. Under SOC 2 CC5.3 the AUP gap is a documented exception that the 2026-Q3 Type II audit will flag. Under GDPR Article 28 the missing sub-processor disclosure triggers the right to object for any EU-resident customer. Address within 30 days to avoid a HIPAA notification cascade and a SOC 2 audit exception.
90-Day Action Plan
- Week 1 — Disable Copilot tenant-wide pending policy adoption and Microsoft Copilot-specific BAA execution. Owner: CISO · Effort: LOW · Confidence: HIGH
- Week 2 — Adopt SanctumShield-generated AI Acceptable Use Policy (14 sections, regulation-anchored) and route for CISO + GC + CEO signature. Owner: Legal · Effort: MEDIUM · Confidence: HIGH
- Week 4 — Publish AI sub-processor disclosure on the trust page and notify enterprise customers with a 30-day right-to-object notice. Owner: Legal · Effort: MEDIUM · Confidence: MEDIUM
- Week 6 — Implement managed-device conditional access for corporate OAuth across Google Workspace and M365. Owner: IT · Effort: HIGH · Confidence: MEDIUM
- Week 8 — Roll out company-wide AI safe-use training with mandatory acknowledgment tied to the new AUP. Owner: HR · Effort: MEDIUM · Confidence: HIGH
Regulation-anchored. Customized. Auditor-readable.
The full policy is 14 sections plus 3 appendices, ~3,500-4,500 words, customized to the customer’s industry, jurisdictions, and frameworks. Sample shows § 1 Purpose & Scope, § 2 Definitions, § 5 Healthcare-Specific Restrictions (HIPAA-anchored, customer-specific), and § 14 Deployed Agent Policy with the new § 14.9 Agent-Authored Code Governance sub-clause. An outside-counsel-generated AUP of comparable depth typically costs $5,000 to $25,000+; SanctumShield delivers it as part of the $99/month subscription.
AI Acceptable Use Policy
Sample excerpt · 4 of 14 sections
Executive Summary
This Policy governs the use of artificial intelligence tools, agents, and embedded-AI features at Acme Health, Inc. It is anchored to HIPAA §164.502(e), §164.312, SOC 2 Common Criteria CC5.3 and CC7.2, GDPR Articles 28 and 35, the EU AI Act (Articles 13, 14, 50, 52), and the NIST AI Risk Management Framework. The Policy applies to all employees, contractors, and agents acting on Acme Health's behalf. The full Policy contains 14 sections plus three appendices; this excerpt shows § 1, § 2, § 5, and § 14.
Section 1: Purpose and Scope
1.1 Purpose
This Policy establishes the rules under which artificial intelligence (AI) tools may be used by Acme Health, Inc. ("Acme Health" or "the Company") personnel in the course of their work. AI tools are powerful and increasingly pervasive; their use creates legal, regulatory, contractual, and reputational risks that are categorically different from those posed by traditional information technology.
This Policy exists to manage those risks while preserving the productivity and innovation benefits AI tools provide.
1.2 Scope
This Policy applies to:
- All employees, contractors, consultants, and temporary staff acting on Acme Health's behalf
- All AI tools, including but not limited to: generative AI (text, code, image, audio, video), AI assistants, AI agents, autonomous-agent frameworks, AI features embedded in business applications, and AI-enabled browser extensions
- All Acme Health information systems and any personal devices used to access Company data
- All Acme Health data, customer data, employee data, vendor data, and third-party data the Company processes
1.3 Effective Date
This Policy is effective 2026-05-01 and supersedes any prior AI use guidance issued by Acme Health.
Section 2: Definitions
For the purposes of this Policy:
AI Tool — Any software product or service that uses machine learning, natural language processing, or generative artificial intelligence to produce text, code, images, audio, video, recommendations, classifications, or other outputs.
AI Agent — An AI tool that operates with a degree of autonomy, including the ability to use other tools, call APIs, retrieve data, take actions, or coordinate with other agents.
Embedded AI — AI features built into a business application (Microsoft 365 Copilot, Google Workspace Gemini, Salesforce Agentforce, Notion AI, etc.) where the AI inference is proxied through the SaaS vendor's own infrastructure.
Sanctioned AI Tool — An AI tool that has been reviewed and approved for specific use cases per the AI Tools Registry maintained by the CISO's office (Appendix A).
Confidential Information — Acme Health business information that is not public, including personnel records, customer lists, financial data, source code, product roadmaps, and contractual terms.
Restricted Information — Acme Health information subject to specific legal, contractual, or regulatory restrictions, including Protected Health Information (PHI) under HIPAA.
Section 5: Healthcare-Specific Restrictions (HIPAA Compliance)
Because Acme Health processes Protected Health Information (PHI) as defined under HIPAA, the following additional restrictions apply to any AI tool used in connection with patient data, clinical operations, or revenue cycle management:
5.1 Business Associate Agreement Required
No AI tool may process PHI without a Business Associate Agreement (BAA) executed with the AI vendor that explicitly includes the AI service as in-scope. This includes embedded-AI features such as Microsoft 365 Copilot, Google Workspace Gemini, and Salesforce Agentforce — the underlying enterprise BAA does not automatically extend to AI features.
5.2 Prohibited Embedded-AI Configurations
Until a Copilot-specific BAA addendum is executed and the data flow is documented, the following Microsoft 365 Copilot configurations are PROHIBITED on the Acme Health tenant:
- Copilot indexing of any SharePoint site or OneDrive folder containing PHI
- Copilot summarization of any Outlook conversation containing PHI
- Copilot transcription or summarization of Teams meetings discussing PHI
5.3 Patient-Facing AI Disclosure
Per HIPAA §164.520 (Notice of Privacy Practices) and applicable state law, any AI tool that interacts directly with a patient (chatbot, intake assistant, AI scheduler) must be disclosed in the Notice of Privacy Practices and any patient-facing consent forms. The disclosure must specify what AI is used for, what data it accesses, and what controls govern that access.
5.4 Clinical Decision Support
AI tools used to support clinical decisions are subject to additional governance under § 164.312(c)(1) (integrity controls) and the FDA's Software as a Medical Device (SaMD) guidance where applicable. Clinical decision support tools require executive sign-off from the Chief Medical Officer in addition to the CISO.
5.5 Sub-Processor Disclosure
Per GDPR Article 28 (where applicable to EU-resident patient data) and customer DPAs, the AI sub-processor list maintained at acmehealth.example/trust must be updated within 30 days of any new AI vendor onboarding. Customers retain the right to object to a new sub-processor and have 30 days from notification to do so.
Section 14: Deployed Agent Policy
14.1 Scope
This Section applies whenever Acme Health deploys, builds, hosts, or integrates an AI agent — including no-code agents created in platforms such as Microsoft Copilot Studio, Salesforce Agentforce, Gemini Enterprise custom agents, or ServiceNow Agentic AI — and whenever Acme Health consumes Model Context Protocol (MCP) servers or participates in agent-to-agent (A2A) discovery chains.
14.2 Agent Registration
Every agent Acme Health deploys must be registered in the agent inventory maintained by the CISO's office before it is enabled in production. Registration captures: agent name, owner, hosting platform, declared tools and skills, identity model, data classes the agent is permitted to access, and the application boundary the agent operates within.
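The registration record in § 14.2 can be sketched as a simple data structure. This is a minimal illustration of the fields the clause names; the field names and the gate function are hypothetical, not the actual SanctumShield registry schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistration:
    """Illustrative registry entry mirroring the fields listed in § 14.2."""
    name: str
    owner: str                          # named human owner
    hosting_platform: str               # e.g. "Copilot Studio"
    tools_and_skills: list[str] = field(default_factory=list)
    identity_model: str = ""            # e.g. "service principal"
    permitted_data_classes: list[str] = field(default_factory=list)
    application_boundary: str = ""

def may_enable_in_production(reg: AgentRegistration) -> bool:
    """An agent may go live only once every required field is populated."""
    return all([reg.name, reg.owner, reg.hosting_platform,
                reg.identity_model, reg.permitted_data_classes,
                reg.application_boundary])
```

A record with any required field left empty fails the gate, which is the point of registering before enabling in production.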
14.3 MCP Server Allowlist
Acme Health maintains a vetted allowlist of permitted MCP servers. Connecting an agent to any MCP server not on the allowlist is prohibited. The allowlist is reviewed at the same cadence as the AI Tools Registry. Allowlisted MCP servers are subject to re-validation at minimum quarterly to detect Discovery Poisoning.
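The allowlist control in § 14.3 is deny-by-default: a connection is permitted only if the MCP server is explicitly vetted. A minimal sketch, with hypothetical internal server URLs standing in for the real allowlist:

```python
# Hypothetical vetted MCP server allowlist (deny by default).
MCP_ALLOWLIST = {
    "https://mcp.internal.example/filestore",
    "https://mcp.internal.example/tickets",
}

def may_connect(mcp_server_url: str) -> bool:
    """Permit a connection only to servers on the vetted allowlist."""
    return mcp_server_url in MCP_ALLOWLIST
```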
14.7 Kill-Switch
Every deployed agent must be subject to a documented kill-switch — a single action by the CISO's office (or designee) that immediately disables the agent across all environments. The kill-switch procedure is tested quarterly.
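The § 14.7 requirement — one action disabling an agent everywhere at once — can be illustrated as follows. The environment names and in-memory state table are assumptions for the sketch; a real kill-switch would act on the hosting platforms' own APIs:

```python
# Hypothetical agent state across all environments.
ENVIRONMENTS = ["dev", "staging", "prod"]
agent_state = {env: {"intake-bot": "enabled"} for env in ENVIRONMENTS}

def kill_switch(agent_name: str) -> list[str]:
    """Disable the named agent in every environment; return those changed."""
    touched = []
    for env in ENVIRONMENTS:
        if agent_state[env].get(agent_name) == "enabled":
            agent_state[env][agent_name] = "disabled"
            touched.append(env)
    return touched
```

Running the same switch twice is a cheap quarterly test: the second invocation should touch nothing, confirming the first disabled the agent everywhere.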
14.9 Agent-Authored Code Governance
Where Acme Health engineering teams use coding agents that author production code (Cursor, Claude Code, Google Antigravity, Gemini CLI, GitHub Copilot Workspace, or equivalent), the following controls apply:
- (a) Policy provenance — the coding agent's policy and security context must be documented.
- (b) Named human accountability — every agent commit, PR, or deployment must carry a named human owner.
- (c) Change-management evidence — coding agents must operate within the same code review and CI/CD gating as human-authored code.
- (d) Runtime IAM separation — the IAM identity the coding agent holds at authoring time must be distinct from the IAM identity the deployed code itself runs under.
- (e) Validator-pair attestation — where a coder-validator agent pair is used, the validator's attestations are recorded as evidence but do not substitute for the named-human-accountability requirement.
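The named-human-accountability control can be enforced as a CI gate on commit messages. This sketch assumes a hypothetical `Human-Owner:` commit trailer as the accountability marker; any equivalent convention would work:

```python
def has_named_human_owner(commit_message: str) -> bool:
    """Pass the CI gate only if a non-empty 'Human-Owner:' trailer is present."""
    for line in commit_message.splitlines():
        if line.startswith("Human-Owner:") and line.split(":", 1)[1].strip():
            return True
    return False
```

In a pipeline, the gate would run against each commit in the push and fail the build for any agent-authored commit lacking the trailer.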
Appendices
Appendix A: Approved AI Tools Registry (sample excerpt)
The full registry is maintained by the CISO's office. The excerpt below is illustrative of the format.
| Tool | Category | Approved Use Cases | Data Class Limit | Approved | Review |
|---|---|---|---|---|---|
| Microsoft 365 Copilot | Embedded productivity | Document drafting (non-PHI), meeting summarization (non-PHI) | Confidential (non-PHI) | 2026-05-15 | 2026-11-15 |
| Notion AI | Embedded knowledge | Knowledge-base search, document summarization | Confidential | 2026-04-01 | 2026-10-01 |
| Cursor | Direct developer tool | Code authoring with mandatory PR review; agentic features DENIED until § 14.9 controls in place | Confidential (no customer identifiers) | 2026-04-15 | 2026-07-15 |
| ChatGPT Enterprise | Direct conversational | Research, drafting, brainstorming (no Restricted data) | Confidential (no PHI) | 2026-04-01 | 2026-10-01 |
Same data, different artifact, different audience.
Derived from the same audit as § 01 above. CEO voice (first-person plural), plain English, no jargon, no CEL or SPIFFE acronyms. The document a CEO actually hands up to the board audit committee — the Executive Risk Report stays in the CISO’s briefcase as the supporting detail.
Board Memo — AI Governance Risk
Acme Health, Inc.
To the Board of Directors and Audit Committee of Acme Health
We have completed a comprehensive review of our artificial intelligence risk posture and identified a high level of exposure that requires immediate attention. Our most significant concern is the unmanaged use of Microsoft 365 Copilot within our digital environment that contains protected health information. Currently, this tool is active without the necessary policy safeguards or specific legal agreements required for healthcare data. This creates a situation where a single configuration change could trigger a mandatory data breach notification under federal law.

The lack of a formal acceptable use policy for AI tools also threatens our upcoming security audit certifications. We are taking decisive action to pause these services until we have established the proper governance, legal disclosures, and employee training to ensure we utilize these technologies safely and remain in full compliance with healthcare privacy regulations.
What we found
- We discovered that Microsoft 365 Copilot is active on our primary data systems and accessing protected health information without the required HIPAA business associate agreements or usage policies in place.
- Our engineering team is using an AI coding assistant called Cursor that may be transmitting sensitive source code and customer identifiers to external providers without proper data protection oversight.
- We currently lack a formal AI Acceptable Use Policy and a public list of AI sub-processors, which will result in a failure to meet our SOC 2 audit requirements and international data privacy obligations.
What we are doing
- Within the first week, our Chief Information Security Officer is disabling Copilot across the entire organization until we have adopted formal safety policies and executed the necessary business associate agreements.
- By the second week, our Legal team will implement a new AI Acceptable Use Policy to provide clear rules for how our employees interact with these tools, with sign-off from the CISO, General Counsel, and CEO.
- By the fourth week, our Legal team will publish a formal disclosure of all AI vendors we use and notify enterprise customers, ensuring we meet our transparency requirements for regulators and contractual obligations to the business.
I welcome the Board's questions regarding these remediation steps at our next session.
Generate your own — for your company, department, or line of business.
$99/month. Same artifacts, customized to your industry, jurisdictions, frameworks, and AI tool inventory. Run the free Shadow AI Risk Calculator first to see the assessment style; the full subscription produces all three deliverables (plus the verification URL) in minutes from a guided assessment.