Why AI Governance Becomes a Liability During M&A
Mergers and acquisitions have always stress-tested IT and security organizations. New networks, unfamiliar systems, and mismatched policies create windows of exposure that threat actors actively exploit. But in 2024 and beyond, there is a new category of risk that most integration playbooks have not caught up with: ungoverned AI tool usage across the combined entity.
When two organizations merge, they rarely bring identical AI policies — or any formal AI policies at all. The acquiring company may have invested in a structured approach to governing tools like ChatGPT, Copilot, or Gemini. The acquired company may have allowed employees to use AI tools freely, with no visibility into what data was shared, which tools were approved, or how usage varied across departments. The moment those workforces combine under a single set of compliance obligations, the acquirer inherits every governance gap the target company had accumulated.
This is not a hypothetical concern. Legal teams conducting contract reviews with AI-assisted tools, engineers using code-generation assistants to work on proprietary systems, and HR teams feeding sensitive employee data into summarization tools — these behaviors exist at virtually every company today. Without a structured governance framework in place before and immediately after the merger closes, the combined organization faces regulatory exposure, data leakage risk, and the reputational consequences that follow.
The AI Tool Sprawl Problem in Acquired Companies
Shadow AI — the proliferation of AI tools adopted by employees without formal IT or security approval — is widespread even in well-governed organizations. In acquisition targets, the problem is often dramatically worse. Smaller companies and growth-stage businesses frequently lack the IT governance infrastructure to track SaaS adoption at all, let alone the specialized usage patterns of AI assistants. By the time a deal closes, the acquiring company may be integrating a workforce where dozens of distinct AI tools are in active use, none of them vetted.
The tool categories typically involved span a wide surface area. Browser-based AI assistants like ChatGPT and Claude are ubiquitous across knowledge worker roles. Coding assistants such as GitHub Copilot and Cursor are standard in engineering organizations. AI-enhanced productivity suites — including Microsoft 365 Copilot and Gemini for Google Workspace (formerly Duet AI) — are embedded in email, document editing, and spreadsheet workflows. Each of these represents a potential channel through which sensitive intellectual property, customer data, or strategic information could have been processed by a third-party model.
The sprawl problem is compounded by the fact that most organizations have no baseline inventory of AI tool usage. Without instrumentation at the browser or endpoint level, IT teams are largely guessing. Surveys and self-reporting are unreliable, particularly for tools employees use habitually and do not perceive as risk vectors. Establishing that baseline — rapidly, accurately, and without disrupting day-to-day operations — is the foundational challenge for any M&A security integration team dealing with AI governance.
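What that baseline pass can look like in practice is sketched below, assuming browser or proxy telemetry that exposes visited hostnames per user; the domain catalog and event shape here are illustrative, not a complete inventory.

```typescript
// Minimal sketch: turn browser or proxy telemetry into a first-pass AI tool
// inventory. The domain catalog below is illustrative, not exhaustive.

type AiCategory = "general-assistant" | "coding-assistant" | "productivity-suite";

const AI_TOOL_DOMAINS: Record<string, { tool: string; category: AiCategory }> = {
  "chatgpt.com": { tool: "ChatGPT", category: "general-assistant" },
  "claude.ai": { tool: "Claude", category: "general-assistant" },
  "gemini.google.com": { tool: "Gemini", category: "general-assistant" },
  "copilot.microsoft.com": { tool: "Microsoft Copilot", category: "general-assistant" },
};

interface BrowsingEvent {
  user: string;
  department: string;
  hostname: string;
  timestamp: string; // ISO 8601
}

// Count distinct users per tool to get a rough adoption baseline.
function buildInventory(events: BrowsingEvent[]): Map<string, Set<string>> {
  const usersByTool = new Map<string, Set<string>>();
  for (const e of events) {
    const match = AI_TOOL_DOMAINS[e.hostname];
    if (!match) continue; // not a known AI tool; ignore
    if (!usersByTool.has(match.tool)) usersByTool.set(match.tool, new Set());
    usersByTool.get(match.tool)!.add(e.user);
  }
  return usersByTool;
}
```

Counting distinct users per tool rather than raw visits keeps the baseline focused on adoption breadth, which is usually the more useful signal for integration planning.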
Due Diligence: Auditing AI Usage Before the Deal Closes
Traditional technology due diligence focuses on infrastructure, software licenses, data architecture, and known security vulnerabilities. AI governance should now be a named workstream in that process. The goal during pre-close diligence is to develop a risk-adjusted picture of how AI tools are being used, what data categories are likely involved, and whether any usage patterns represent immediate compliance or contractual liabilities.
Practically, this means requesting documentation of any AI-specific policies the target company has adopted — acceptable use policies, vendor approval lists, data classification rules that apply to AI tools, and evidence of employee training. In most cases, that documentation will be sparse or nonexistent, which itself is a data point that should inform integration planning and potentially deal terms. Legal counsel should also examine whether any contracts with customers or partners contain clauses restricting the use of AI on covered data, a category of contractual obligation that is increasingly common in regulated industries.
Where technical access is available pre-close — such as in friendly acquisitions where the target is cooperative with diligence requests — deploying lightweight discovery tooling to characterize AI tool usage across the workforce can dramatically accelerate integration planning. Understanding which tools are in use, which departments rely on them most heavily, and which usage categories present the highest risk allows the acquiring company to prioritize its post-close governance actions rather than operating blind in the first critical weeks after the deal closes.
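As a rough illustration of how discovery output might be turned into that priority list (the record shape and risk weights below are assumptions, not a standard scale):

```typescript
// Sketch: rank departments by aggregate AI usage risk so the integration team
// can sequence its post-close governance work. Risk weights are illustrative
// assumptions, not an industry-standard scale.

interface UsageRecord {
  department: string;
  tool: string;
  riskWeight: number; // e.g. 1 = enterprise-licensed with a DPA, 5 = consumer tool, no DPA
}

function prioritizeDepartments(records: UsageRecord[]): Array<[string, number]> {
  const scores = new Map<string, number>();
  for (const r of records) {
    scores.set(r.department, (scores.get(r.department) ?? 0) + r.riskWeight);
  }
  // Highest aggregate risk first: these departments get attention first post-close.
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}
```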
Post-Merger Integration: Building a Unified AI Governance Framework
The integration phase is where governance frameworks either take root or collapse under operational pressure. The merged organization needs a unified AI governance posture, but achieving that quickly without alienating newly integrated employees or disrupting productivity requires deliberate sequencing. The instinct to immediately impose the acquirer's policies wholesale often backfires, particularly when the acquired company has a different culture around tool adoption and autonomy.
A more effective approach begins with visibility before enforcement. Deploy monitoring capabilities across the combined workforce to establish an accurate inventory of AI tool usage within the first thirty to sixty days post-close. This baseline serves two purposes: it gives the governance team the data needed to make informed policy decisions, and it provides a benchmark against which to measure the impact of any policy changes. Enforcement without visibility creates conflict without clarity.
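Measuring against that benchmark can be as simple as diffing two usage snapshots; a minimal sketch, assuming snapshots keyed by tool name:

```typescript
// Sketch: diff a pre-policy baseline against current usage to see whether a
// governance change actually shifted behavior. The snapshot shape is assumed.

type UsageSnapshot = Record<string, number>; // tool name -> weekly active users

function policyImpact(baseline: UsageSnapshot, current: UsageSnapshot) {
  const tools = new Set([...Object.keys(baseline), ...Object.keys(current)]);
  return [...tools].map((tool) => ({
    tool,
    before: baseline[tool] ?? 0,
    after: current[tool] ?? 0,
    delta: (current[tool] ?? 0) - (baseline[tool] ?? 0),
  }));
}
```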
Once the baseline is established, governance teams should tier their response by risk. Usage of enterprise-licensed AI tools with appropriate data processing agreements — such as Microsoft 365 Copilot with proper tenant configuration — represents a different risk profile than usage of free consumer AI products where data may be used for model training. The governance framework should formalize this tiering, creating an approved tool list, a process for requesting approval of new tools, and clear guidance on which data categories are permissible to use with which tools. This is also the appropriate moment to align on a single policy framework that will govern the combined entity going forward, incorporating the best elements of each organization's prior approach where applicable.
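One way such a tiering could be captured in machine-readable form is sketched below. The tools named are real products, but the tier assignments and data-category rules are illustrative choices each organization would make for itself.

```typescript
// Sketch of a risk-tiered AI tool policy. Tier assignments and data-category
// rules are illustrative assumptions, not recommendations.

type Tier = "approved" | "conditional" | "prohibited";
type DataCategory = "public" | "internal" | "confidential" | "regulated";

interface ToolPolicy {
  tool: string;
  tier: Tier;
  allowedData: DataCategory[];
  rationale: string;
}

const policy: ToolPolicy[] = [
  {
    tool: "Microsoft 365 Copilot",
    tier: "approved",
    allowedData: ["public", "internal", "confidential"],
    rationale: "Enterprise license and DPA in place; tenant-bound processing.",
  },
  {
    tool: "ChatGPT (free tier)",
    tier: "prohibited",
    allowedData: [],
    rationale: "Consumer terms; inputs may be used for model training.",
  },
];

// Check a proposed use against the policy; unknown tools default to denied.
function isPermitted(tool: string, data: DataCategory): boolean {
  const entry = policy.find((p) => p.tool === tool);
  if (!entry || entry.tier === "prohibited") return false;
  return entry.allowedData.includes(data);
}
```

Expressing the policy as data rather than prose has a practical benefit: it can be enforced by the same tooling that collects the usage baseline.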
Regulatory and Compliance Risks That Surface During Integration
AI governance failures during M&A integration do not stay contained to IT. They surface in regulatory examinations, customer audits, and legal proceedings — often at the worst possible time, when the organization is already stretched by integration demands. Understanding the specific regulatory risks that AI tool usage creates is essential for legal counsel and compliance officers participating in integration planning.
For organizations subject to GDPR or CCPA, the use of AI tools that process personal data belonging to EU or California residents triggers specific obligations. If employees at the acquired company were using non-enterprise AI tools to process customer data — a common finding — the combined entity may have inherited data processing activities that were never disclosed in a privacy notice and never covered by a data processing agreement with the AI vendor. In many regulatory contexts, that gap carries disclosure and remediation obligations, and addressing it proactively is significantly less costly than having it surface during an examination.
Industry-specific regulations add further complexity. Financial services firms integrating an acquired company must ensure that AI-assisted communications and decision support tools meet FINRA and SEC recordkeeping standards. Healthcare organizations face HIPAA implications if protected health information was processed through unapproved AI tools. Organizations operating in the European Union will increasingly contend with the EU AI Act's requirements around high-risk AI system use and transparency. Mapping these regulatory overlaps to the actual AI tool usage discovered during integration is a critical deliverable for the legal and compliance workstream.
Practical Steps for IT and Security Teams Managing the Transition
For IT and security teams on the ground during integration, the challenge is translating governance strategy into operational reality under significant time pressure. Several concrete steps can meaningfully reduce risk during the transition period without requiring months of preparation.
First, instrument before you announce. If you intend to implement AI governance monitoring across the acquired workforce, deploy observability tooling before communicating new policies. This gives you an accurate picture of current behavior rather than a distorted baseline reflecting employees' awareness that they are being watched. Tools like Zelkir's browser extension can be deployed enterprise-wide to classify AI tool usage by category without capturing raw prompt content — addressing both the governance need and the employee privacy concern that often complicates monitoring conversations.
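The general pattern, recording the category of an interaction while deliberately omitting its content, might look something like the following. This is an illustrative sketch of the approach, not Zelkir's actual implementation; the event schema and host mapping are assumptions.

```typescript
// Sketch of content-free AI usage telemetry from a browser extension.
// The event deliberately carries only metadata, never prompt text.

interface AiUsageEvent {
  toolCategory: string; // e.g. "general-assistant"
  hostname: string;     // which AI service was visited
  timestamp: string;    // when, ISO 8601
  // Deliberately absent: prompt text, responses, page content.
}

const CATEGORY_BY_HOST: Record<string, string> = {
  "chatgpt.com": "general-assistant",
  "claude.ai": "general-assistant",
  "gemini.google.com": "general-assistant",
};

function classifyNavigation(hostname: string): AiUsageEvent | null {
  const toolCategory = CATEGORY_BY_HOST[hostname];
  if (!toolCategory) return null; // not an AI tool; record nothing
  return { toolCategory, hostname, timestamp: new Date().toISOString() };
}
```

In a real extension this classification would typically be driven by a navigation listener; the point of the sketch is the event shape, in which nothing captured can leak a prompt.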
Second, establish an AI governance steering committee that includes representatives from IT, security, legal, compliance, and business leadership from both organizations. AI governance decisions touch all of these functions, and unilateral action from any single team tends to produce policies that work technically but fail operationally.

Third, communicate the governance framework to employees in terms of protection, not restriction. Employees who understand that AI governance policies exist to protect the company's IP and customer data — and by extension their own work — are meaningfully more likely to comply than those who perceive the policies as arbitrary limitations.

Fourth, build a rapid review process for AI tool approval requests. The worst outcome is a governance posture so restrictive that employees route around it entirely. A lightweight approval workflow — with clear criteria and a target turnaround time of a few business days — maintains control while preserving the productivity benefits that AI tools genuinely provide. A minimal sketch of such a queue appears below.
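A queue like this needs very little machinery; the statuses, request fields, and the three-day target below are illustrative choices, not a prescribed standard.

```typescript
// Sketch of a lightweight AI tool approval queue with a turnaround target.
// The three-day check uses calendar days as a simple stand-in for the
// business-day target described above.

interface ApprovalRequest {
  id: number;
  tool: string;
  requestedBy: string;
  dataCategories: string[]; // what data the requester intends to use
  submittedAt: Date;
  status: "pending" | "approved" | "denied";
}

const TARGET_TURNAROUND_DAYS = 3;

// Flag requests that have breached the turnaround target so the governance
// team can escalate before employees route around the process.
function overdueRequests(requests: ApprovalRequest[], now: Date): ApprovalRequest[] {
  const msPerDay = 24 * 60 * 60 * 1000;
  return requests.filter(
    (r) =>
      r.status === "pending" &&
      (now.getTime() - r.submittedAt.getTime()) / msPerDay > TARGET_TURNAROUND_DAYS,
  );
}
```

Surfacing overdue requests is what keeps the turnaround promise credible; a queue without an escalation signal quietly becomes the bottleneck employees route around.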
Conclusion: Making AI Governance a First-Class M&A Workstream
The M&A playbooks that most organizations are running today were written before generative AI became a standard fixture of knowledge work. They account for network integration, identity federation, data migration, and application rationalization — but they treat AI tools the way previous generations of playbooks treated shadow IT generally: as a cleanup task rather than a first-class risk workstream. That approach is no longer adequate.
The organizations that navigate AI governance in M&A most effectively treat it the same way they treat cybersecurity integration: with dedicated resources, clear ownership, structured timelines, and executive sponsorship. They conduct AI-specific due diligence before the deal closes, deploy observability tooling immediately post-close to establish a behavioral baseline, and build unified governance frameworks that are risk-tiered, operationally realistic, and communicated clearly to employees across the combined workforce.
The cost of getting this right is modest relative to the cost of getting it wrong. A single incident in which sensitive IP from an acquired company is traced to unmonitored AI tool usage — whether discovered internally, by a regulator, or by a customer — can dwarf the governance investment many times over. For CISOs, compliance officers, and IT leaders participating in the next integration, AI governance deserves a seat at the table from day one of diligence through the final stages of workforce consolidation.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
