SIG Lite and SIG Core built the vendor-security baseline.
They were never designed for shadow AI.
By Lindsay Hiebert · Founder · CISSP
Shared Assessments’ SIG questionnaires are the reason third-party risk management exists as a discipline. We respect the lineage. Every CISO reading this page has either filled out a SIG or reviewed one. So let’s start with what SIG does well — and then get honest about the eight places it structurally cannot reach in 2026.
What SIG Lite and SIG Core got right.
Shared Assessments published the first SIG in 2009. Over seventeen years it became the industry’s common language for vendor security due diligence — the thing you could send to a vendor in Chicago or Frankfurt or Singapore and know they’d understand what you were asking.
SIG Core (roughly 900 questions) gave you exhaustive coverage. SIG Lite (roughly 300 questions) gave you a workable subset for less-critical vendors. Between them, they standardized due diligence across thousands of Fortune 500 vendor-management programs.
Every meaningful benefit of the modern GRC industry — Vanta, Drata, OneTrust, BitSight, SecurityScorecard, UpGuard, Panorays — exists because SIG defined what “a vendor security assessment” meant in the first place.
None of what follows is a criticism of SIG as a framework. It’s a description of the gap between what SIG was designed for — third-party vendor assessment at enterprise procurement scale — and what CISOs actually need in an era where their employees adopt AI tools faster than procurement can inventory them.
Where SIG Lite and SIG Core structurally cannot reach.
Snapshot, not stream
SIG is answered once, reviewed once, filed. The AI tool landscape changes weekly — OpenRouter, Fireworks, Deep Infra, MiniMax, Moonshot all launched or materially updated inside a single quarter. A vendor SIG you approved in Q1 is stale by Q2. SIG has no refresh mechanism, and the GRC platforms that automate SIG only automate the distribution, not the recency.
Self-reported, not verified
The vendor fills out their own SIG. There is no independent check that what they wrote is still true in production. Fifty-nine percent of employees already hide their AI usage from IT; the idea that a vendor's self-disclosures are more honest than your own employees' is a hope, not a control.
Vendor-scoped, not employee-scoped
SIG asks what a vendor does. It does not ask what your employees do with AI tools their employer never approved. That gap — employee-initiated shadow AI — is the dominant AI risk surface of 2026, and SIG is structurally blind to it. You cannot send a SIG to an AI tool your staff is using on a personal account.
Generic, not regulation-anchored
SIG questions are framework-agnostic. They ask about access controls, encryption, and incident response in the abstract. They do not cite HIPAA §164.502(e), SOC 2 CC6.1, NIST AI RMF GOVERN-1.4, or EU AI Act Article 5. You cannot hand a filled-out SIG to a regulator as evidence of compliance with a specific clause. It is a due-diligence artifact, not a compliance artifact.
Slow — measured in weeks, not seconds
The real-world SIG cycle is 8–12 weeks per vendor: send, fill (2–6 weeks), review (1–3 weeks), remediation negotiation, sign-off. In a world where new generative AI tools ship monthly and employees onboard them in under an hour, an 8-week assessment cycle is architecturally incompatible with the risk it is trying to cover.
Expensive to administer
Vendor risk teams spend hundreds of hours per year chasing SIG responses, triaging inconsistencies, and maintaining vendor risk registers. The entire GRC automation industry exists because the manual SIG process is unscalable. The cost is real, even when you don't see a line item — it's buried in headcount.
Does not generate customer-side artifacts
A completed SIG is a spreadsheet filed inside your vendor management system. It is not an AI Acceptable Use Policy for your own employees. It is not a board-ready risk report. It is not a 90-day prioritized action plan. It is not a compliance evidence package you can attach to a SOC 2 audit response. It is, strictly, an inbound due diligence record.
Assumes procurement already knows the vendor exists
This is the deepest gap. SIG works when procurement knows a vendor exists. Shadow AI is, by definition, employees using AI tools procurement has never heard of — signed up with personal email, paid for on a personal card, connected through browser extensions or IDE plugins. You cannot send a SIG to a vendor you do not know about, on an account you do not own, used by employees who are hiding the usage. The SIG paradigm is structurally incapable of reaching the majority of the actual risk surface.
Vendor security self-assessments are a gaping hole of uncertainty.
Every SIG response, every CAIQ response, every filled-in vendor risk questionnaire, every “trust us” badge on a security page, and every line item in a GRC platform has the same load-bearing assumption: the vendor answering the question is telling the truth, in full, about their current state, against an accurate internal model of their actual controls. That assumption is the entire paradigm. It is also almost never verified.
The incentive is aligned the wrong way. The vendor wants the deal. The vendor’s sales and security teams are incentivized to make the SIG response look as clean as possible. The buyer’s vendor risk team is understaffed, triaging hundreds of vendors, and has no realistic mechanism to falsify any specific claim short of a full third-party audit nobody can afford. The result is a compliance artifact that reads like due diligence but functions as marketing.
History is not kind to this assumption. SolarWinds had clean security attestations until the build pipeline compromise. Okta had SOC 2 reports in hand through multiple disclosed breaches. MOVEit was a trusted file transfer vendor up to the supply-chain attack that touched hundreds of downstream customers. LastPass, Change Healthcare, Snowflake, Kaseya — every one of them had completed vendor questionnaires on file at their customers before the incident. The SIG paradigm did not catch any of them. It cannot. That is not its job. A questionnaire is a point-in-time self-description, not a continuous observation.
Now add AI to this. Vendors self-attest that they have an AI Acceptable Use Policy — yes or no. They self-attest that they do not train on customer data — yes or no. They self-attest that their AI sub-processors are governed — yes or no. None of these claims are verifiable by the buyer. The AI layer compounds the uncertainty because the vendor may not even know what AI tools their own employees are using. Fifty-nine percent of employees hide their AI usage from their own IT departments. A vendor whose own employees are hiding Shadow AI from their own CISO cannot give you an accurate SIG response about AI governance, no matter how honestly they try.
Self-reported compliance is a promise, not proof. SanctumShield inverts the paradigm: instead of asking a vendor what they claim, we match your own observed network traffic against a 64-domain AI endpoint registry and produce findings anchored to real regulatory clauses. Observation over attestation. Evidence over promise. That is the only way out of the uncertainty hole.
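As a concrete illustration, the observation step reduces to matching observed hostnames from firewall, proxy, or DNS logs against a registry of known AI endpoints. The sketch below is a minimal, hypothetical version of that matching logic; the registry entries and function names are illustrative assumptions, not SanctumShield's actual 64-domain registry or code:

```python
# Minimal sketch of observation-over-attestation: match hostnames observed
# in your own logs against a registry of known AI endpoints.
# The registry entries below are illustrative, not the actual 64-domain list.
AI_ENDPOINT_REGISTRY = {
    "api.openai.com": "OpenAI API",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Anthropic Claude",
    "generativelanguage.googleapis.com": "Google Gemini API",
    "openrouter.ai": "OpenRouter",
}

def match_observed_hosts(observed_hosts):
    """Return {hostname: tool} for hosts that equal, or are subdomains of,
    a registered AI endpoint domain."""
    findings = {}
    for host in observed_hosts:
        # Normalize: trim whitespace, lowercase, drop a trailing dot from DNS logs.
        h = host.strip().lower().rstrip(".")
        for domain, tool in AI_ENDPOINT_REGISTRY.items():
            if h == domain or h.endswith("." + domain):
                findings[h] = tool
                break
    return findings
```

Run against a pasted hostname list, `match_observed_hosts(["cdn.example.com", "API.openai.com.", "beta.claude.ai"])` flags the OpenAI and Claude hosts and ignores the CDN: the finding comes from traffic you observed, not from anything a vendor attested to.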
SIG does not produce a customized, targeted AI risk profile assessment for your company.
That is the disease.
Every other gap on this page is a symptom.
A SIG response is a collection of self-reported vendor claims against a generic control checklist. It is not an assessment. It does not analyze your industry. It does not reflect your jurisdictions, your headcount, your data classification, your regulatory exposure, the AI tools your employees are actually using, or the observed traffic on your own network. It does not produce a risk score. It does not produce findings. It does not produce a headline a board can read. It does not produce a prioritized action plan. It does not anchor anything to the specific clauses of HIPAA, SOC 2, GDPR, or the EU AI Act that would apply to your company.
Which means the artifact the CISO, the board, the auditor, the insurance broker, and the enterprise customer in procurement all actually need — a customized, targeted AI risk profile assessment of this specific company, right now, against the regulations that actually apply, with findings and a plan — does not exist inside the SIG paradigm, does not exist inside the vendor-risk-automation paradigm, does not exist inside the compliance-automation paradigm, and does not exist inside the enterprise GRC paradigm. It is the biggest gap. It is the gap that leaves a company and its security governance materially exposed. And it is the exact artifact SanctumShield generates in minutes — from a guided security assessment any CISO can run themselves.
The board will not accept a control checklist as an answer. The regulator will not accept a vendor questionnaire as evidence. The auditor will not accept a filed SIG response as a risk assessment. The cyber insurance underwriter will not accept a GRC dashboard as proof of governance. They want a real assessment. The existing paradigm structurally cannot produce one at the speed, price, or customization the mid-market buyer needs. SanctumShield does. That is the entire reason the product exists.
What SIG lacks: eight things a CISO needs that no questionnaire can deliver.
An AI-risk-ready-proof Acceptable Use Policy
SIG does not produce an AUP. It asks you whether you have one. No SIG, no SIG Lite, no SIG-based GRC platform, and no vendor questionnaire of any kind generates an AI-risk-ready-proof AUP — the kind of policy that cites HIPAA §164.502(e), SOC 2 CC6.1, EU AI Act Article 5, and NIST AI RMF GOVERN-1.4 in the exact sections those clauses govern, customized to your industry, size, and jurisdictions. SanctumShield generates that document in minutes — from a guided security assessment any CISO can complete themselves. 14 sections (now including Section 14 — Deployed Agent Policy), 3 appendices, counsel-reviewable, board-ready.
A board-ready risk score
SIG produces a spreadsheet for the vendor management system. SanctumShield produces a score, a headline, five regulation-anchored findings, and a 90-day prioritized action plan. Written for the audit committee, not for the GRC platform.
Observed-network evidence
SIG asks vendors to self-report. SanctumShield matches your own firewall, proxy, or DNS logs against a 64-domain AI endpoint registry so you can see what's actually happening — not what a vendor says is happening.
Regulation-anchored citations
SIG questions are generic. SanctumShield cites real clauses — HIPAA §164.502(e), SOC 2 CC6.1, EU AI Act Article 5, NIST AI RMF GOVERN-1.4 — that your counsel can verify against source law and that you can hand to a regulator.
Shadow AI coverage
SIG covers known vendors. SanctumShield catches unknown shadow AI via endpoint pattern matching against the domains your employees actually visit. The 80%+ of enterprise AI usage that procurement has never heard of (Zluri State of AI in the Workplace 2025) lives here.
Continuous landscape maintenance
SIG is a static questionnaire that ages the moment it's filed. SanctumShield updates the endpoint registry, regulation citations, tool catalog, and policy prompts every month — so your audit in month 2 reflects month 2, not month 1.
Speed measured in minutes, not weeks
SIG takes 8–12 weeks per vendor. SanctumShield generates both deliverables, the Executive Risk Report and the AI Acceptable Use Policy, in minutes from a guided security assessment. The asymmetry is the point.
Price that matches the buyer
SIG administration burns hundreds of hours of vendor risk team time per year. SanctumShield is $99 per month, an expense-report-level decision, explicitly priced so the CISO at a 200-person SaaS does not need to open a procurement case to buy it.
“Does your SIG response give you an AI-risk-ready-proof Acceptable Use Policy?”
It does not. SIG was never designed to generate policy artifacts. It was designed to collect self-reported vendor claims. If the AUP your board is waiting on exists only as a line item on a checklist — “do you have an AI policy, Y/N” — you do not have an AUP. You have a promise to write one. SanctumShield writes it for you, with real clause citations, in the time it takes to make a coffee.
What SanctumShield does that SIG was never meant to.
| Dimension | SIG Lite / SIG Core | SanctumShield |
|---|---|---|
| Assessment cadence | Point-in-time snapshot, filed and aged | Continuously refreshed — monthly updates to endpoints, citations, catalog, and prompts |
| Data source | Self-reported by the vendor | Observed-network (your own firewall / proxy / DNS logs) matched against a 64-domain AI endpoint registry |
| Scope | Known vendors in your inventory | Known vendors PLUS shadow AI tools procurement has never heard of |
| Anchoring | Framework-generic questions | Regulation-anchored to HIPAA, SOC 2, GDPR, NIST AI RMF, EU AI Act, CCPA, ISO 27001 — with real clause citations |
| Turnaround | 8–12 weeks per vendor | Minutes from click to deliverable |
| Artifacts produced | Filed spreadsheet for the vendor management system | AI Acceptable Use Policy + Executive Risk Report + 90-day action plan + Tools Registry, in 4 formats (Word, MD, HTML, text) |
| Who it serves | Third-party risk management team | CISO, IT Director, board, compliance, auditor, cyber insurance broker — all from one run |
| Cost | Hundreds of hours of internal risk team time per year, plus GRC tooling | $99/month month-to-month, cancel anytime — or $0 for the free Shadow AI Risk Calculator |
SanctumShield is not a replacement for SIG. A mature vendor risk program uses both: SIG for inbound due diligence on known vendors, and SanctumShield for continuous AI governance coverage of the shadow AI your SIG process cannot see. The two are complementary. We’re explicit about that because “we replace SIG” would be the wrong claim — and because SIG’s actual domain (vendor due diligence) is not where your worst AI risk lives anymore.
SIG is not the only paid solution that fails this buyer. The enterprise GRC category shares the same structural flaws — at a much higher price.
The category that sells “AI governance” to the Fortune 500 overlaps heavily with the category that sells vendor risk, compliance automation, and security posture rating. Every one of these tools exists for a real reason — and every one of them fails the 200-person healthcare SaaS CISO in a way that is structural, not incidental.
Enterprise AI governance platforms
Priced for the Fortune 500 AI governance office. Requires a dedicated ML ops team to deploy. Features target model risk management for models trained in-house, not the SMB problem of “which SaaS tools are my employees using?” A 200-person healthcare SaaS does not train models. It uses twelve AI SaaS tools it cannot inventory.
Firewall and SASE AI modules
Requires a next-generation firewall or SASE deployment already in place. Assumes you already own the Palo Alto / Netskope / Zscaler platform; the AI module is an add-on. For a company whose entire security stack is “Cloudflare plus Google Workspace,” the floor price is the full platform purchase. The module is the cheap part.
Endpoint agents and DLP
Endpoint agents see what runs on the endpoint. They miss browser-based AI tools (ChatGPT, Claude, Perplexity, Gemini) that your employees access through normal web browsing, which is the majority of shadow AI. And they require a managed endpoint program, which SMBs often don't have.
Vendor-risk questionnaire automation
Automates the SIG distribution and scoring cycle. These platforms inherit every structural gap in SIG itself (snapshot, self-reported, vendor-scoped, slow) and then add an enterprise-grade price tag on top. Faster questionnaire fatigue is not the fix.
Compliance automation platforms
Built to get you through a SOC 2 audit, not to govern AI. They automate checklists for the controls you claim to have, but do not generate an AI Acceptable Use Policy, do not analyze your logs for shadow AI, and do not produce regulation-anchored findings. The SOC 2 auditor will ask about AI governance; Vanta will not have the answer.
Big-firm consulting assessments
Generic framework and PowerPoint deliverable, produced over 6–12 weeks by an associate team. The output is real, but it is a one-time artifact that ages immediately, does not include log analysis, and lives in a binder. Repeat the engagement next year for another $50K+.
Outside counsel
Produces a legally defensible policy, which is real value. But there is no log analysis, no ongoing registry maintenance, no free calculator for the team, and the policy itself ages as regulations change, bringing you back to the same firm next year. And counsel is not a discovery tool; counsel presumes you already know what you're governing.
Every one of these solutions is built for a buyer who already has an enterprise security program. They are the upgrade paths for teams that already have Palo Alto, or already have OneTrust, or already retain Deloitte. They are legitimate tools for that buyer.
The 50–2,000-employee company does not have those things. Their entire security headcount is one or two people. They have a board-pressure conversation about AI governance happening right now, and a SOC 2 auditor asking them next quarter, and a cyber insurance renewal questionnaire in the inbox. They do not have an enterprise platform to bolt an AI module onto. They need an answer they can expense, run in the next hour, and forward to their board before Friday. That is the buyer SanctumShield was built for — and the buyer the enterprise GRC category has always structurally declined to serve.
Sticker price is only the first line of the cost.
A $150K AI governance platform is not a $150K decision. It’s a $150K licence plus months of deployment, plus ongoing analyst time to extract anything useful, plus the cost of everything the tool still cannot tell you. The headline number is the smallest part. Here is what the invoice hides.
Time to first value — measured in quarters, not hours
Enterprise platforms require scoping, procurement, security review, legal negotiation, deployment, integration with your identity provider, policy import, and user training before they produce the first artifact. Six to twelve weeks is a fast deployment. For a CISO with a board meeting in 30 days, “time to first value of 90 days” is a failure mode, not a feature.
Analyst time to interpret the result
The output of BitSight, SecurityScorecard, OneTrust, or Vanta is raw data. Someone has to look at it, decide what's important, triage findings, write the narrative, and produce the thing a board will actually read. That work — usually 20–40 hours per report — is never on the invoice. It comes out of your team's week.
Identifying what is missing
Every legacy tool leaves gaps, and the CISO is responsible for knowing what the tool does not tell them. What AI tools were not inventoried? What regulations were not mapped? What shadow AI was invisible to the endpoint agent? Answering “what is my tool missing?” requires hours of manual cross-referencing every cycle — work the tool itself will not do.
Complexity and operational friction
Most enterprise GRC tools have hundreds of configurable controls, dozens of dashboards, a permissions model, and a rules engine. The learning curve is real. The maintenance burden is real. The cost of onboarding a new analyst to the tool is real. Simple tools generate reports; complex tools generate full-time jobs.
Limited scope, repeatedly extended with add-ons
The AI module is an add-on to the vendor risk platform. The regulatory feed is an add-on to the compliance platform. The log analysis is an add-on to the firewall. The policy generator is an add-on to the GRC suite. By the time you have coverage approaching what SanctumShield delivers out of the box, your annual contract has doubled — and the modules still do not talk to each other.
Error-prone manual synthesis
Because no one tool covers the full picture, the CISO ends up stitching together outputs from three or four tools into a single board-ready narrative. Every stitch is an opportunity for a copy-paste error, a stale data point, a regulation that was updated yesterday, or a framework citation that no longer matches the current version of the clause. The artifact is only as reliable as the analyst who assembled it at 11pm the night before the meeting.
No continuous regulatory monitoring
Almost none of the legacy tools track the regulatory environment for you. They check boxes against the framework version that existed when they last shipped. When EU AI Act delegated acts ship, when state AI disclosure laws pass, when NIST AI RMF profiles update, you find out from a blog post, not from the tool. The “continuous compliance” claim is continuous against a frozen target.
No actual risk assessment — just a control checklist
Most of the category produces a control inventory: you either have the control or you don't. That is not a risk assessment. A risk assessment is regulation-anchored findings, business-impact analysis, prioritized mitigations, and a 90-day action plan with named owners. That is the artifact the board, the auditor, and the cyber insurance broker all actually want — and the legacy tools do not generate it.
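To make that distinction concrete, here is a hedged sketch of the difference expressed as data: a checklist entry is a yes/no per control, while an assessment finding carries a citation, a business impact, a severity, an owner, and a deadline that together yield a prioritized 90-day plan. All field names are illustrative assumptions, not SanctumShield's actual schema:

```python
from dataclasses import dataclass

# A control checklist entry is a boolean: you have the control or you don't.
# A risk assessment finding is anchored, prioritized, and owned.
# All field names here are illustrative, not SanctumShield's actual schema.
@dataclass
class Finding:
    title: str
    citation: str      # regulation anchor, e.g. "HIPAA §164.502(e)"
    impact: str        # business-impact narrative
    severity: int      # 1 (low) .. 5 (critical)
    owner: str         # named owner of the mitigation
    due_in_days: int   # slot in the 90-day action plan

def ninety_day_plan(findings: list) -> list:
    """Prioritize: highest severity first; earlier deadline breaks ties."""
    return sorted(findings, key=lambda f: (-f.severity, f.due_in_days))
```

The prioritization is the part a checklist structurally cannot express: two companies with identical control gaps get different plans depending on severity, ownership, and regulatory exposure.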
A CISO’s ego can’t save them.
The people who buy SanctumShield’s competitors are some of the smartest, most technically credible people in the enterprise. They have CISSPs, CISMs, Cisco CCIEs, fifteen-year resumes, breach war stories, and board trust. They are not wrong to believe they can run a governance program manually, with a binder of policies and a quarterly vendor review and a sharp analyst and the force of personal experience.
They are just structurally outmatched by the operational velocity of shadow AI. No amount of credential, no amount of tenure, no amount of personal skill changes the arithmetic: new AI tools ship every week, regulations update every month, employees adopt tools faster than procurement can see them, and the artifact a board wants — a regulation-anchored, log-verified, framework-mapped, 90-day-actionable risk report — cannot be produced manually inside the time a CISO actually has.
Ego — the confidence that experience and effort will bridge the gap — is the most dangerous line item in an AI governance program. It is the reason programs look fine in Q1 and have a regulator asking questions in Q3. It is the reason CISOs spend Saturday morning cross-referencing an EU AI Act citation against a vendor’s SIG response. It is the reason the board report is late.
SanctumShield exists because the work is not hard — it is just too much to do by hand, every week, forever. The answer is not a smarter CISO. It is a tool that already did the work.
Run it on your own environment in the next five minutes.
The free Shadow AI Risk Calculator takes twelve questions. The paid Executive Risk Report takes a company profile, a list of AI tools, and an optional paste of your firewall or proxy hostnames — and gives you a board-ready artifact in minutes. Neither requires a vendor questionnaire, a procurement cycle, or a security review.