Why Shadow AI Is Now a Material M&A Risk
Mergers and acquisitions teams have long scrutinized software licenses, data privacy practices, and cybersecurity posture during due diligence. In 2024 and beyond, a new category of hidden liability demands the same rigor: shadow AI. Shadow AI refers to the unsanctioned, ungoverned use of generative AI tools — ChatGPT, Claude, Gemini, Copilot, Perplexity, and dozens of others — by employees who adopt them independently without IT or security approval, policy guidance, or audit infrastructure.
For acquirers, this matters for a straightforward reason: when employees at a target company paste customer data, trade secrets, financial projections, or regulated personal information into a commercial AI tool, that data may be transmitted to third-party servers, used to train future models, or logged in ways that violate contractual confidentiality obligations. None of this shows up in a standard SOC 2 report or penetration test. It requires a specific, structured investigation.
The scale of the problem is not hypothetical. Industry surveys consistently show that 50 to 70 percent of knowledge workers have used at least one AI tool without explicit employer authorization. In companies that lack formal AI governance programs — which describes most targets below $500 million in revenue — shadow AI usage is effectively invisible to leadership. Acquirers who skip a dedicated shadow AI investigation are inheriting liabilities they cannot yet quantify.
What Shadow AI Looks Like Inside a Target Company
Shadow AI rarely looks like reckless behavior from the inside. It looks like productivity. A sales engineer uses ChatGPT to draft RFP responses and pastes in product specifications and pricing models. A junior lawyer uses Claude to summarize depositions that include client-privileged information. A finance analyst feeds raw earnings projections into an AI tool to generate commentary for an investor deck. Each of these employees believes they are saving hours of work. None of them have been told that their inputs may be retained.
The tools themselves vary widely in their data handling policies. Some enterprise tiers of AI products let customers opt out of having their inputs retained for model training. Most free and consumer-tier versions do not. A target company may have a mix of teams using different tiers, different tools, and different personal or corporate accounts — often without any centralized awareness. IT may have blocked a tool at the network level, only for employees to access it via mobile data or personal laptops. This is the operational reality acquirers need to probe.
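One way to make that probing concrete is to build a simple usage inventory as findings come in. The sketch below is illustrative only: the tool names are real products, but the tier, account, and retention values are placeholders to be verified against each vendor's current terms, not statements about any vendor's actual policy.

```python
# Illustrative sketch: modeling the mixed-tier reality a diligence team
# might uncover. Tool names are real products, but the retention and
# DPA values below are placeholders -- verify against each vendor's
# current terms, not this example.
from dataclasses import dataclass


@dataclass
class AIToolUsage:
    tool: str                       # product name as observed in logs or interviews
    tier: str                       # "consumer-free", "consumer-paid", "enterprise"
    account_type: str               # "personal" or "corporate"
    training_opt_out: bool | None   # None = unknown / unverified
    dpa_in_place: bool              # data processing agreement signed?


observed = [
    AIToolUsage("ChatGPT", "consumer-free", "personal", None, False),
    AIToolUsage("GitHub Copilot", "consumer-paid", "personal", None, False),
    AIToolUsage("Claude", "enterprise", "corporate", True, True),
]

# Flag any usage where retention terms are unverified or no DPA exists.
for u in observed:
    if u.training_opt_out is not True or not u.dpa_in_place:
        print(f"REVIEW: {u.tool} ({u.tier}, {u.account_type} account)")
```

Even a crude inventory like this forces the diligence team to record what is verified versus assumed for each tool, which is exactly the distinction that matters when drafting representations and warranties.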
It is also worth noting that shadow AI is not limited to large language models accessed through a browser. Browser-based AI assistants, AI-enabled plugins in productivity suites, code completion tools like GitHub Copilot used without enterprise agreements, and AI-powered customer service platforms integrated by line-of-business teams without security review all fall under the same umbrella. A thorough due diligence process must account for the full spectrum.
The Legal and Regulatory Exposure Shadow AI Creates
The legal dimensions of shadow AI in an acquisition context span at least four distinct risk categories. The first is data protection and privacy law. If employees at the target have entered personal data belonging to EU residents into a tool whose vendor lacks appropriate data processing agreements, the acquiring company inherits potential GDPR liability, including the possibility of supervisory authority inquiries triggered by the transaction itself. The same logic applies to CCPA, HIPAA, and sector-specific regulations depending on the industry.
The second category is confidentiality and trade secret exposure. If a target company's employees have shared proprietary source code, customer lists, product roadmaps, or merger-related financial information with a commercial AI tool, that information may be outside the company's exclusive control at the moment of signing. Depending on the terms of the acquisition agreement, this could constitute a breach of representations about data security practices or intellectual property ownership.
Third is contractual compliance. Many enterprise customer agreements and NDAs contain explicit data handling obligations that prohibit the sharing of contract-covered information with third-party SaaS platforms. Shadow AI usage can create downstream breach scenarios that the target company does not yet know about. Fourth, and increasingly significant, is emerging AI-specific regulation. The EU AI Act, sector-specific AI guidance from the FCA and SEC, and state-level bills in the United States all create compliance obligations that may already apply to the target's current AI usage patterns — or will within the integration timeline.
The Due Diligence Checklist: Eight Areas to Investigate
The following checklist is designed for the security, legal, and compliance workstreams of an M&A due diligence process. It is not exhaustive, but it covers the areas where material risk is most likely to surface.
One: AI Policy Inventory. Request copies of any written AI usage policies, acceptable use policies, or AI governance frameworks. Assess whether they cover generative AI specifically, whether they have been communicated to employees, and when they were last updated. A policy written before 2023 almost certainly predates the current risk landscape.

Two: Approved Tool Register. Ask whether the target maintains a list of approved AI tools and vendors. Assess whether this list exists in writing, who owns it, and whether there is a review and approval process for new AI tool adoption.

Three: Browser and Endpoint Telemetry. Request DNS query logs, browser proxy logs, or endpoint DLP reports for the past 12 to 24 months. Look for patterns of access to known AI platforms — particularly free consumer tiers — from company-managed devices. A short scripted example of this kind of log review follows the checklist.

Four: Network-Level Controls. Determine whether any AI tools have been blocked or restricted at the firewall or proxy level. Assess whether those controls are effective or easily circumvented.

Five: Vendor Agreements for AI Tools. For any AI tools that are formally deployed, obtain vendor agreements and data processing addenda. Review data retention terms, training opt-out provisions, and subprocessor disclosures.

Six: HR and Training Records. Determine whether the target has provided any employee training on AI data hygiene. Review onboarding materials and security awareness training curricula for AI-specific content.

Seven: Incident History. Ask specifically whether any data incidents, near-misses, or internal complaints have involved AI tools. Assess whether the target has a process for employees to report AI-related concerns.

Eight: IP Chain of Custody. For software companies in particular, determine whether AI-generated code has been incorporated into products and whether the target has documented its AI-assisted development practices in a way that supports copyright ownership claims.
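The telemetry review in item three can be partially scripted. Below is a minimal sketch, assuming a CSV export with one timestamp, client, domain row per line and a small hand-maintained domain list. Real log formats vary by vendor, and AI platform domains change frequently, so both assumptions need adapting to the target's actual environment.

```python
# Minimal sketch for checklist item three: scanning an exported DNS query
# log for traffic to known AI platforms. The log format (one
# "timestamp,client,domain" row per line) and the domain list are
# assumptions -- adapt both to the target's actual telemetry exports.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "www.perplexity.ai": "Perplexity",
}


def scan_dns_log(path: str) -> Counter:
    """Count queries per AI tool from a CSV DNS log export."""
    hits = Counter()
    with open(path, newline="") as f:
        for _timestamp, _client, domain in csv.reader(f):
            tool = AI_DOMAINS.get(domain.strip().lower())
            if tool:
                hits[tool] += 1
    return hits


if __name__ == "__main__":
    for tool, count in scan_dns_log("dns_export.csv").most_common():
        print(f"{tool}: {count} queries")
```

A script like this will not catch access over mobile data or personal devices, which is why the telemetry review complements rather than replaces the interview-based items on the checklist.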
Red Flags That Should Delay or Reprice a Deal
Not every shadow AI finding warrants the same response. Some gaps are remediable through standard post-close integration work. Others represent material liabilities that should affect deal structure, price, or timeline. The following scenarios fall into the latter category.
High-volume uncontrolled AI usage in regulated data environments is the clearest red flag. If the target operates in healthcare, financial services, or defense contracting and you find evidence that employees have been routinely entering regulated data into consumer-tier AI tools with no data processing agreements in place, the potential liability — from regulatory fines, contractual breach claims, and customer notification obligations — could be significant. Acquirers should obtain a legal opinion on the scope of exposure before proceeding to close.
Absence of any AI governance infrastructure in a company that is otherwise technically sophisticated also warrants scrutiny. A company with a mature SOC 2 program but no AI policy, no tool inventory, and no visibility into AI usage has a governance gap that its own security team has chosen to ignore. That choice reflects something about the organization's compliance culture that extends beyond AI. Similarly, if the target's representations in the purchase agreement include claims about data security practices that are inconsistent with the evidence you gather on shadow AI usage, that discrepancy is both a legal issue and a trust signal about the quality of other representations.
How Acquirers Should Govern AI Post-Close
Due diligence ends at close, but AI governance responsibility does not. Acquirers who identify shadow AI risks during diligence need a concrete integration plan that addresses those risks within the first 90 to 180 days. The goal is not to prohibit AI use — that is both unenforceable and counterproductive — but to bring usage into a governed framework that provides visibility, policy enforcement, and audit capability.
The first step is deploying AI usage monitoring infrastructure across the acquired entity's endpoints and browser environments. This should be done transparently, with employee communication explaining that AI tool usage will be tracked and classified for compliance purposes, and making clear that prompt content will not be read or recorded. Platforms like Zelkir operate precisely at this layer: classifying which tools are being used, what category of task they are being used for, and whether that usage falls within approved policy, without capturing the raw content of employee inputs. This approach balances compliance visibility with employee trust.
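To make the metadata-only model concrete, here is a hypothetical sketch of what such a classification event might look like. Zelkir's actual schema is not public; the field names, task categories, and approval logic below are invented for illustration.

```python
# Hypothetical sketch of metadata-only AI usage classification: each event
# carries tool, pseudonymous user, and coarse task category, never prompt
# content. Field names and categories are illustrative, not any vendor's
# actual schema.
from dataclasses import dataclass
from datetime import datetime, timezone

# Approved (tool, tier) pairs would come from the acquirer's vendor list.
APPROVED = {("Claude", "enterprise"), ("GitHub Copilot", "enterprise")}


@dataclass(frozen=True)
class UsageEvent:
    timestamp: datetime
    user_id: str          # pseudonymous identifier, not a name
    tool: str
    tier: str
    task_category: str    # e.g. "drafting", "code", "summarization"
    # Deliberately no 'prompt' or 'response' field: content is never captured.


def policy_status(event: UsageEvent) -> str:
    """Classify an event against the approved (tool, tier) register."""
    return "approved" if (event.tool, event.tier) in APPROVED else "flagged"


event = UsageEvent(datetime.now(timezone.utc), "u-4821", "ChatGPT",
                   "consumer-free", "drafting")
print(policy_status(event))  # -> "flagged"
```

The design choice worth noting is the absence of a content field in the event schema itself: what is never collected cannot leak, be subpoenaed, or erode employee trust.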
Second, acquirers should rationalize the acquired company's AI tool landscape against their own approved vendor list, and communicate clearly which tools are permitted under what conditions. Enterprise agreements with major AI vendors should be consolidated where possible to ensure consistent data processing terms and training opt-out protections. Third, AI-specific clauses should be incorporated into the post-close employee handbook and security awareness training program. Employees who have been using AI tools freely for months need a positive explanation of new boundaries, not just a prohibitive policy that lands without context. Finally, AI governance should be added as a standing agenda item in the first post-close compliance committee meetings, with specific milestones for remediation of any material gaps identified during diligence. The M&A process should leave the combined entity in a stronger AI governance position than either company held independently.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
