“Every CISO I talk to has discovered some form of shadow AI,” Gartner vice president analyst Andrew Walls told CSO Online in March 2026. Vendors are embedding AI features into existing products without flagging it. Employees are copying sensitive data into public models. Security teams are finding out after the fact. The question is no longer whether shadow AI exists inside your organisation. It is what you do when you find it.

The five-step shadow AI runbook at a glance

  1. Triage the discovery. Assess what data was exposed before you react.

  2. Shut it down or bring it in. Decide whether to block, sanction or formalise the use.

  3. Build the inventory. Map every AI tool, sanctioned and unsanctioned.

  4. Classify data for AI use. Define what can and cannot enter an AI tool.

  5. Treat AI data spills as standard incidents. Same rigour as any other data breach.

A note on jurisdictions. This runbook is jurisdiction-agnostic. The data points are drawn from Australian (CyberCX), global (Netskope), and US (CSO Online interviews) sources, and the structural advice applies anywhere. Specific regulatory thresholds for breach notification, data classification and consumer rights vary by region and sector.

Shadow AI is now a real incident category

AI data spills accounted for approximately 3% of all incidents CyberCX’s Digital Forensics and Incident Response team handled in 2025. That figure comes from a single provider’s case mix, not the entire market, but it marks the first time a major Australian incident response firm has broken out AI data spills as a standalone category in its annual threat report. CyberCX noted that in many cases, the organisation had no enterprise licensing for the AI platform, no data-loss prevention controls in place, and no network logging, making it impossible to identify or quantify the spillage.

The wider data is more alarming. Netskope’s Cloud and Threat Report 2026 found that GenAI-related data policy violations more than doubled year-over-year. The average organisation now detects 223 monthly attempts by employees to include sensitive material such as regulated data, intellectual property, source code and credentials in GenAI prompts or uploads. The top 25% of organisations see 2,100 such incidents per month. Netskope also found that 47% of GenAI users access tools through personal, unmanaged accounts, with security teams often having no visibility over what data those accounts are processing.

Step 1: Triage the discovery

When shadow AI use surfaces, the first job is to assess what actually happened, not to punish the user. “The first instinct is to react. And that’s never a good thing in cybersecurity,” Olivia Rose, IANS faculty member and founder of Rose CISO Group, told CSO Online. “You need to think through your answer holistically and look at the level of risk to the organization before you respond.”

Walls described a structured risk-assessment approach: understand what data was exposed (personal, financial, proprietary, regulated), what the AI provider does with inputs (training, storage, retention policies), and whether a notifiable breach has occurred under applicable privacy or sector regulations. The notifiable-breach test varies materially across regimes. Australia’s Notifiable Data Breaches scheme under the Privacy Act, the GDPR’s 72-hour reporting obligation, US state notification laws, and sector-specific rules in financial services and healthcare each apply different thresholds and timelines. Do not assume a single global standard.

Not every instance of shadow AI is an incident. An employee using a public chatbot to rewrite a meeting agenda is a policy issue. An employee pasting client financial data into the same tool is a data spill requiring forensics, notification assessment, and potentially regulatory reporting.

The triage questions are straightforward. Was sensitive, regulated or privileged data involved? Do the AI provider’s terms of service allow training on user inputs? Can the data be retrieved or deleted? Is there a contractual, regulatory or legal notification obligation? These questions belong in a documented playbook, answered in advance rather than worked out on the fly during an incident.
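
To keep triage consistent from one discovery to the next, those questions can be captured as a small piece of playbook logic. The sketch below is a simplified illustration, not CyberCX’s or Gartner’s method: the category names, the sensitivity list and the three dispositions are assumptions to be replaced with the organisation’s own classification scheme and regulatory obligations.

```python
from dataclasses import dataclass

@dataclass
class ShadowAIDiscovery:
    """One instance of unsanctioned AI use, surfaced by DLP, logging or a user report."""
    data_categories: set[str]        # e.g. {"personal", "financial", "source_code"}
    provider_trains_on_inputs: bool  # from the provider's terms of service
    data_retrievable: bool           # can the provider delete or return the data?
    notification_clause: bool        # contractual or regulatory notification duty?

# Illustrative category list; align it with your own data classification scheme.
SENSITIVE = {"personal", "financial", "regulated", "privileged", "source_code", "credentials"}

def triage(d: ShadowAIDiscovery) -> tuple[str, list[str]]:
    """Return a coarse disposition plus notes that shape remediation."""
    notes = []
    if d.provider_trains_on_inputs:
        notes.append("provider may train on inputs; assume permanent disclosure")
    if not d.data_retrievable:
        notes.append("data cannot be retrieved or deleted")
    if not (d.data_categories & SENSITIVE):
        return "policy_issue", notes          # e.g. rewriting a meeting agenda: coach, don't escalate
    if d.notification_clause:
        return "notifiable_incident", notes   # involve legal and privacy; deadlines vary by regime
    return "data_spill", notes                # forensics, quantification, remediation record

# Example: client financial data pasted into a public chatbot that trains on inputs.
print(triage(ShadowAIDiscovery({"financial"}, True, False, True)))
```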

Step 2: Shut it down or bring it in?

The CSO Online analysis highlights a decision fork that most incident response playbooks do not yet cover: when to shut an unapproved tool down entirely, and when to pull a promising use case into the formal governance process.

If the tool was used with regulated data, client-privileged material, or data subject to contractual restrictions, the response is clear. Treat it as an incident. Contain, assess, notify where required, and enforce the policy.

If the tool was used for a legitimate productivity purpose with low-sensitivity data, a different response may be warranted. Vandy Hamidi, CISO at BPM (a US tax, advisory and accounting firm), described how his team handles the formalisation path when a promising shadow AI use case surfaces: “Our PMO process includes a formal information security review, a legal review, data privacy review. It includes a return on investment as well to see if this tool makes sense. And then it either gets approved or it doesn’t.” The point is not to drag every shadow AI discovery through governance, but to have a real path that takes weeks, not months.
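
A minimal sketch of that decision fork follows, using invented inputs rather than BPM’s actual PMO criteria; its only purpose is to force the block-versus-formalise choice to be explicit instead of improvised case by case.

```python
def shutdown_or_bring_in(restricted_data: bool,
                         legitimate_use_case: bool,
                         sanctioned_alternative_exists: bool) -> str:
    """Decide the initial response path for a shadow AI discovery.

    Inputs are deliberately coarse; a real process would also draw on the
    triage output, vendor terms and the organisation's risk appetite.
    """
    if restricted_data:
        # Regulated, privileged or contractually restricted data: treat as an incident.
        return "contain_and_enforce"
    if legitimate_use_case and not sanctioned_alternative_exists:
        # Promising use case with no approved equivalent: fast-track the governance review.
        return "formal_review"   # security, legal, privacy and ROI review, in weeks not months
    # Low value, or an approved tool already covers it: redirect the user.
    return "redirect_to_sanctioned_tool"
```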

Hamidi also flagged the underlying driver behind shadow AI: “If a company as a whole is slow on the adoption curve, it effectively forces the use of shadow AI.” Blanket bans without sanctioned alternatives create the exact pressure that pushes employees toward unapproved tools.

Some sectors take the opposite approach with good reason. Defence contractors, regulated healthcare providers, and legal practices handling client privilege often choose strict blocking and accept the productivity trade-off in exchange for cleaner audit trails and tighter data control. The right answer depends on the organisation’s risk profile and what its sanctioned alternatives can actually deliver. There is no universal rule; there is only a calibration question.

Step 3: Build the inventory you should already have

Discovery is reactive. The goal is to move upstream, which means building an inventory of every AI tool in use, sanctioned or not, and classifying them by data sensitivity, regulatory exposure, and vendor terms.

The inventory should cover embedded AI features in existing SaaS tools (Microsoft 365, Salesforce, ServiceNow and similar platforms now ship with AI features enabled by default), standalone AI applications employees have signed up for independently, and any internal or custom AI models deployed by engineering or data teams. Many organisations will find their AI footprint is significantly larger than they assumed.
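
One way to make the inventory actionable is to hold a small, queryable record per tool. The fields and example entries below are illustrative assumptions rather than a standard schema, and most organisations would keep this in an existing asset or GRC system rather than in code.

```python
from dataclasses import dataclass, field
from enum import Enum

class Sanction(Enum):
    SANCTIONED = "sanctioned"
    TOLERATED = "tolerated"       # known, risk-accepted, pending review
    UNSANCTIONED = "unsanctioned"

@dataclass
class AIToolRecord:
    """One row in the AI inventory: standalone app, embedded SaaS feature or internal model."""
    name: str
    kind: str                      # "embedded_saas" | "standalone" | "internal_model"
    owner: str                     # accountable business or engineering owner
    sanction: Sanction
    data_sensitivity: str          # highest classification tier observed or permitted
    vendor_trains_on_inputs: bool  # from the vendor's terms; recheck at contract renewal
    regulatory_exposure: list[str] = field(default_factory=list)

# Illustrative entries only; field values are placeholders, not vendor facts.
inventory = [
    AIToolRecord("Embedded SaaS assistant", "embedded_saas", "IT", Sanction.SANCTIONED,
                 "enterprise_only", False, ["privacy"]),
    AIToolRecord("Public chatbot (personal account)", "standalone", "unknown",
                 Sanction.UNSANCTIONED, "unknown", True, []),
]

# Simple queries the security team will actually run:
unsanctioned = [t.name for t in inventory if t.sanction is Sanction.UNSANCTIONED]
training_risk = [t.name for t in inventory if t.vendor_trains_on_inputs]
```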

A TrendAI global study of 3,700 decision-makers found only 38% of organisations have comprehensive AI policies in place. A policy without an inventory is unenforceable, because the security team has no way to know which systems the policy actually covers.

Step 4: Classify data for AI use

Not all data carries the same risk in an AI context. A data classification scheme designed for traditional storage and access controls may not map cleanly to AI use cases, where the risk is not unauthorised access but permanent disclosure to a third-party model.

At minimum, organisations need classification tiers that distinguish:

  • Data that can be used with public AI tools (already public, non-sensitive).

  • Data that can be used with enterprise-licensed AI tools under contractual protections.

  • Data that must never leave the organisation’s controlled environment.

  • Data where AI processing is prohibited by regulation or contract, such as health records, legal privilege, or client data under no-AI clauses.

This classification should be embedded in onboarding, refreshed annually, and reinforced through the tooling layer. DLP policies should flag attempts to paste classified data into known AI endpoints.
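
A sketch of how those tiers might drive the tooling layer. The tier names, endpoint groups and matching rule are assumptions for illustration; a production DLP platform would express the same logic in its own policy language rather than application code.

```python
from enum import IntEnum

class AITier(IntEnum):
    """Ordered classification tiers for AI use; higher means more restrictive."""
    PUBLIC_OK = 1        # already public, non-sensitive
    ENTERPRISE_ONLY = 2  # enterprise-licensed AI under contractual protections
    INTERNAL_ONLY = 3    # must never leave the controlled environment
    AI_PROHIBITED = 4    # AI processing barred by regulation or contract

# Hypothetical destination groups, each with the most sensitive tier it may receive.
DESTINATION_CEILING = {
    "public_genai_endpoints": AITier.PUBLIC_OK,
    "enterprise_genai_tenant": AITier.ENTERPRISE_ONLY,
    "internal_model_gateway": AITier.INTERNAL_ONLY,
}

def dlp_verdict(data_tier: AITier, destination: str) -> str:
    """Allow, or block and alert, based on the tier ceiling of the destination."""
    ceiling = DESTINATION_CEILING.get(destination, AITier.PUBLIC_OK)
    if data_tier == AITier.AI_PROHIBITED or data_tier > ceiling:
        return "block_and_alert"
    return "allow"

# Internal-only data pasted toward a public chatbot endpoint:
print(dlp_verdict(AITier.INTERNAL_ONLY, "public_genai_endpoints"))  # block_and_alert
```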

Step 5: Treat AI data spills as standard incidents

CyberCX’s recommendation is to treat AI data spills with the same rigour as any other data breach: documented forensics, quantification of exposure, regulatory notification assessment, and a clear remediation record. The challenge is that many organisations lack the logging needed to even identify what was sent to which AI tool and when.

For CISOs, that means three concrete logging requirements. First, network logging should capture all access to known AI endpoints, including the embedded AI features in approved SaaS platforms. Second, endpoint monitoring should detect clipboard activity to known AI domains, with thresholds calibrated to avoid alert fatigue. Third, browser-based activity logs should record file uploads to public AI tools, since file-based exfiltration is harder to detect than text prompts. Without these three layers, the forensics work after a spill becomes guesswork.
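
As a rough illustration of the first layer, the sketch below scans an exported proxy log for traffic to known AI endpoints. The domain list and column names are placeholders, and most deployments would run this logic in the SIEM or SSE platform rather than a standalone script; the same pattern extends to clipboard and upload telemetry from the endpoint and browser layers.

```python
import csv

# Hypothetical watch list of AI endpoints; maintain it alongside the AI inventory.
KNOWN_AI_DOMAINS = {"chat.example-ai.com", "api.example-ai.com", "assistant.example-saas.com"}

def flag_ai_traffic(proxy_log_path: str) -> list[dict]:
    """Return proxy log rows whose destination host is a known AI endpoint.

    Assumes a CSV export with 'timestamp', 'user', 'dest_host' and 'bytes_out'
    columns; adjust the field names to your own log schema.
    """
    hits = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("dest_host", "").lower() in KNOWN_AI_DOMAINS:
                hits.append({
                    "when": row["timestamp"],
                    "who": row["user"],
                    "where": row["dest_host"],
                    "bytes_out": int(row.get("bytes_out", 0)),  # large uploads warrant a closer look
                })
    return hits
```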

Incident response playbooks also need an AI-specific workflow with triage questions, escalation paths, regulatory notification templates, and decision trees for when to involve legal, privacy and HR. Generic data breach playbooks miss the unique aspects of AI spills, particularly the difficulty of “un-sending” data once it has been ingested by a third-party model.

Olivia Rose offered a useful frame for the post-incident conversation with leadership: “Never let a good breach go to waste. You can leverage it to get budget, resources, support for the cybersecurity organization.” Each incident is also evidence that the existing controls were insufficient. CISOs should use that evidence to argue for the inventory, classification, logging and governance investments that prevent the next incident.

Shadow AI is manageable if CISOs lead

Shadow AI is not avoidable. Employees will use AI tools because they are productive, accessible and, in many cases, actively encouraged by leadership. The CISO’s role is not to block AI but to ensure it is used with visibility, within boundaries, and with a clear response plan when those boundaries are crossed. Organisations that build inventories, fast-track approved tools, classify data for AI use, and treat AI data spills as real incidents will find the problem manageable. Those that rely on policy documents alone will keep discovering shadow AI after the damage is done.

Sources

  • CSO Online, “The CISO’s guide to responding to shadow AI,” 25 March 2026. csoonline.com

  • CyberCX, “Shadow AI: Why your biggest AI threat may come from within,” 27 March 2026. cybercx.com.au

  • CyberCX, 2026 Threat Report, 1 March 2026. cybercx.com.au

  • Netskope Threat Labs, Cloud and Threat Report 2026, January 2026. netskope.com

  • TrendAI/Trend Micro, “Organizations Overlook AI Risk as Governance Fails to Keep Up,” 25 March 2026. prnewswire.com

  • Cybersecurity Dive, “Risky shadow AI use remains widespread,” January 2026. cybersecuritydive.com

  • IT Pro, “Generative AI data violations more than doubled last year,” 12 January 2026. itpro.com