Why Third-Party AI Risk Demands a New Compliance Playbook
Enterprise risk management teams spent decades refining their vendor assessment processes for cloud software, payroll providers, and data processors. Then generative AI arrived, and most of those frameworks became insufficient overnight. The problem is not simply that AI tools are new — it is that they interact with sensitive information in fundamentally different ways than traditional SaaS applications. When an employee pastes a contract clause into ChatGPT or runs financial projections through an AI assistant, they are not just accessing a third-party service. They may be contributing proprietary data to a model training pipeline, exposing regulated information to foreign servers, or creating an audit trail gap that your compliance team cannot close.
Third-party AI risk management is now a board-level concern. According to Gartner, by 2026 more than 80% of enterprises will have deployed some form of generative AI API or application in production — up from less than 5% in 2023. Yet the majority of organizations still lack a formal vendor assessment process tailored specifically to AI tools. Standard vendor questionnaires ask about SOC 2 compliance, incident response plans, and data residency. They rarely ask whether employee inputs are used to fine-tune foundation models, whether inference logs are retained, or how the vendor defines 'aggregate' data in their terms of service.
This guide is designed for CISOs, compliance officers, and IT security leaders who need to build or harden their third-party AI risk programs. It covers how to map your existing AI vendor footprint, what to assess during vendor onboarding, how to address data handling risks specific to AI, and how to establish continuous monitoring so that governance does not stop at contract signature.
Mapping Your AI Vendor Landscape Before You Can Govern It
Effective risk management begins with visibility, and most organizations dramatically underestimate how many AI tools are already in active use. Shadow AI — the use of AI applications that IT has not formally approved or even discovered — is pervasive. Employees adopt browser-based AI writing assistants, coding copilots, meeting summarizers, and research tools without submitting procurement requests. In a typical mid-market company with 500 to 2,000 employees, security teams routinely discover 30 to 60 distinct AI tools in use once they begin actively monitoring browser and network activity.
Before you can assess third-party AI vendors, you need a complete and current inventory. This means moving beyond what procurement has approved and discovering what employees are actually using. Browser-level telemetry is one of the most effective methods for this discovery phase. Platforms like Zelkir can identify AI tool usage patterns across the workforce without capturing raw prompt content — giving IT teams the visibility they need to build a vendor inventory without creating a surveillance concern. The output of this phase should be a tiered catalog: tools with enterprise contracts, tools with individual subscriptions, and unsanctioned tools being used with personal accounts.
Prioritize your assessment effort based on data sensitivity and usage volume. A marketing team using an AI image generator presents a very different risk profile from a finance team routing budget models through an AI assistant. Document not just which tools are in use, but which departments use them, what types of tasks they support, and what categories of data are likely being processed. This usage context becomes the foundation for every vendor assessment that follows.
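To make that prioritization repeatable, some teams encode the inventory as structured records and assign each tool a triage tier. The sketch below is illustrative only, assuming hypothetical field names, example tools, and thresholds; it is not the output of any specific discovery platform.

```python
from dataclasses import dataclass

# Illustrative only: the field names, example tools, and thresholds below are
# hypothetical placeholders, not the output of any specific discovery tool.
@dataclass
class AIToolRecord:
    name: str
    contract_tier: str           # "enterprise", "individual", or "unsanctioned"
    departments: list[str]
    data_categories: list[str]   # e.g. ["public", "internal", "regulated"]
    weekly_active_users: int

def assessment_priority(tool: AIToolRecord) -> str:
    """Rough triage: regulated data, unsanctioned use, or heavy usage reviewed first."""
    if "regulated" in tool.data_categories or tool.contract_tier == "unsanctioned":
        return "high"
    if tool.weekly_active_users > 50 or "internal" in tool.data_categories:
        return "medium"
    return "low"

inventory = [
    AIToolRecord("AI image generator", "enterprise", ["marketing"], ["public"], 85),
    AIToolRecord("AI assistant", "individual", ["finance"], ["internal", "regulated"], 12),
]
order = {"high": 0, "medium": 1, "low": 2}
for tool in sorted(inventory, key=lambda t: order[assessment_priority(t)]):
    print(f"{tool.name}: {assessment_priority(tool)} priority")
```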
The Core Vendor Assessment Framework for AI Tools
Standard vendor security questionnaires need significant augmentation to be effective for AI tools. Your framework should address at least five distinct risk domains: data governance, model transparency, access controls, compliance certifications, and incident response. Each domain requires AI-specific questions that go beyond what generic security assessment templates provide.
On data governance, the critical questions are: Does the vendor use customer inputs to train or fine-tune models? How long are prompts and outputs retained, and in what form? Can customers opt out of data retention entirely? What happens to data when a contract is terminated? Vendors often bury the answers to these questions in privacy addenda or model cards rather than their core data processing agreements. Legal teams should review these documents in full, not just the DPA summary.
Model transparency covers questions many compliance teams overlook. Which foundation model or models power the vendor's product? Are those models hosted by a sub-processor, and if so, which one? Has the model been evaluated for bias or harmful output generation, and is documentation available? For regulated industries — financial services, healthcare, legal — the inability to explain how an AI model reached a particular output can create significant liability. Vendors who cannot provide model cards or who refuse to disclose their sub-processor chain should be flagged for elevated scrutiny.
Access control questions should address whether the vendor supports single sign-on, SCIM provisioning, role-based permissions, and audit log export — the same table stakes you require from any enterprise software vendor, now applied rigorously to AI tools.
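One way to operationalize the framework is to encode the questionnaire as structured data, so every vendor is assessed against the same criteria and unanswered questions are easy to surface. The sketch below mirrors the five domains above; the compliance-certification and incident-response questions shown are illustrative additions, and any scoring logic should come from your own legal and security review rather than this example.

```python
# Illustrative structure only: the domains mirror the framework above, but the
# compliance-certification and incident-response questions are example additions.
AI_VENDOR_ASSESSMENT = {
    "data_governance": [
        "Are customer inputs used to train or fine-tune models?",
        "How long are prompts and outputs retained, and in what form?",
        "Can customers opt out of data retention entirely?",
        "What happens to data when the contract is terminated?",
    ],
    "model_transparency": [
        "Which foundation model or models power the product?",
        "Are those models hosted by a sub-processor, and if so, which one?",
        "Is bias and harmful-output evaluation documentation available?",
    ],
    "access_controls": [
        "Does the vendor support SSO, SCIM provisioning, and role-based permissions?",
        "Can audit logs be exported?",
    ],
    "compliance_certifications": [
        "Which attestations (for example SOC 2 or ISO 27001) are current?",
    ],
    "incident_response": [
        "What is the contractual breach notification window?",
    ],
}

def unanswered(responses: dict[str, dict[str, str]]) -> list[str]:
    """List every checklist question the vendor has not yet answered."""
    gaps = []
    for domain, questions in AI_VENDOR_ASSESSMENT.items():
        for question in questions:
            if not responses.get(domain, {}).get(question):
                gaps.append(f"{domain}: {question}")
    return gaps
```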
Data Handling, Model Training, and the Confidentiality Problem
The single most consequential risk in third-party AI relationships is the possibility that confidential information submitted to an AI tool becomes part of that vendor's training data. This risk is not hypothetical. OpenAI's ChatGPT experienced a high-profile incident in March 2023 when a bug briefly allowed some users to see the titles of other users' chat histories. Samsung engineers inadvertently leaked proprietary semiconductor information when they pasted internal source code into ChatGPT for review. These incidents underscore that the data handling practices of AI vendors deserve the same scrutiny you would apply to a cloud database provider.
Most enterprise-tier AI contracts now include provisions that prevent user data from being used for model training. But the legal language matters enormously. Phrases like 'we do not train on your data' sometimes apply only to supervised fine-tuning, not to reinforcement learning from human feedback collected through thumbs-up and thumbs-down ratings. Other contracts carve out 'aggregate and anonymized' data from training restrictions — language that is often vague enough to encompass patterns derived from your employees' actual inputs. Compliance teams should require vendors to define their training exclusions with precision and confirm how the enterprise and API agreements differ from the consumer terms of service, since employees using personal accounts may be subject to consumer-tier data handling policies even when the company holds an enterprise contract.
For organizations subject to HIPAA, GDPR, CCPA, or sector-specific regulations, the data handling analysis must also address where processing occurs. Several leading AI vendors run inference workloads on infrastructure located in jurisdictions that may not satisfy data residency requirements. Ask vendors to specify the countries where inference happens, not just where data is stored at rest. For European operations, confirm whether the vendor has signed Standard Contractual Clauses and whether they can demonstrate compliance with Schrems II transfer impact assessments.
Contractual and Regulatory Obligations You Cannot Afford to Miss
Third-party AI risk increasingly intersects with regulatory obligations that carry real enforcement risk. The EU AI Act, which entered into force in 2024 and phases in obligations through 2027, imposes due diligence requirements on deployers of high-risk AI systems. Financial services firms operating under the SEC's cybersecurity disclosure rules or the EU's DORA framework must be able to demonstrate that critical third-party technology relationships — including AI tools — are subject to documented oversight. In healthcare, OCR guidance has clarified that business associate agreements are required when AI vendors may process protected health information, even incidentally.
Your AI vendor contracts should include several provisions that are not yet standard in the market. First, a right to audit or right to assessment clause that gives your organization the ability to request evidence of compliance controls on a periodic basis, or in response to a security incident. Second, a breach notification obligation with a defined time window — ideally 48 to 72 hours — that covers both data breaches and incidents where model outputs may have exposed or generated sensitive information. Third, a sub-processor disclosure requirement that obligates the vendor to notify you before adding new sub-processors, particularly foundation model providers. Many enterprise buyers do not realize that the AI assistant they purchased is built on a foundation model operated by a separate entity with its own data handling terms.
Involve legal counsel early in the AI vendor contracting process. The standard agreements offered by AI vendors are written to favor the vendor, and negotiation is often possible for enterprise-tier customers. Document all representations made during sales conversations, since vendor claims about data privacy and model training practices made verbally or in marketing materials may not be reflected in the actual contract language.
Building Ongoing Monitoring Into Your AI Governance Program
Vendor assessment at onboarding is necessary but not sufficient. AI tools and their underlying models evolve rapidly. A vendor that had acceptable data handling practices when you signed your contract may have updated its privacy policy, switched foundation model providers, or introduced new features that change the risk profile of the product entirely. The lifecycle of AI vendor risk management requires continuous monitoring, not a one-time gate.
Operationally, this means establishing a cadence for reassessment. High-risk AI tools — those processing regulated data, used by finance or legal teams, or integrated with core business systems — should be reviewed quarterly. Lower-risk productivity tools might be reviewed annually. Subscribe to vendor security bulletins and monitor their terms of service change history. Services like ToS;DR and Trackchanges.io can automate this monitoring for consumer-tier tools that your employees may be accessing without enterprise agreements.
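A lightweight way to enforce that cadence is to derive each tool's next review date from its risk tier. The intervals in the sketch below (quarterly, semiannual, annual) are examples rather than a standard; substitute whatever your policy actually defines.

```python
from datetime import date, timedelta

# Hypothetical cadences: quarterly for high-risk tools, annual for low-risk,
# as described above. Substitute the intervals your own policy defines.
REVIEW_INTERVAL_DAYS = {"high": 90, "medium": 180, "low": 365}

def next_review(last_review: date, risk_tier: str) -> date:
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[risk_tier])

def is_overdue(last_review: date, risk_tier: str, today: date | None = None) -> bool:
    return (today or date.today()) > next_review(last_review, risk_tier)

print(next_review(date(2025, 1, 15), "high"))  # 2025-04-15
```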
Internally, you need visibility into whether employees are using AI tools in ways that deviate from approved usage patterns. This is where usage governance platforms become operationally important. Zelkir, for example, classifies AI tool usage by behavior category — distinguishing between document drafting, code generation, data analysis, and other use types — without logging the actual content of employee prompts. This gives compliance teams a behavioral audit trail that can surface anomalies, such as a spike in AI usage from the M&A team during a sensitive deal period, without creating a legal or ethical issue around monitoring employee communications. The goal is not surveillance — it is informed governance.
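Because this kind of monitoring works on aggregate usage counts rather than prompt content, the anomaly logic can stay simple. The sketch below flags any team whose latest weekly AI event count jumps well above its trailing average; the team names, counts, and threshold multiplier are hypothetical, and the flag is a starting point for a conversation, not a conclusion.

```python
from statistics import mean

# Illustrative check on aggregate event counts only; no prompt content is
# involved. The team names, counts, and threshold multiplier are hypothetical.
def flag_usage_spikes(weekly_counts: dict[str, list[int]], multiplier: float = 3.0) -> list[str]:
    """Flag teams whose latest weekly count exceeds `multiplier` times their trailing average."""
    flagged = []
    for team, counts in weekly_counts.items():
        if len(counts) < 2:
            continue
        baseline = mean(counts[:-1])
        if baseline > 0 and counts[-1] > multiplier * baseline:
            flagged.append(team)
    return flagged

usage = {
    "marketing": [40, 38, 45, 42],
    "m_and_a": [5, 4, 6, 48],  # sudden spike worth a conversation, not an accusation
}
print(flag_usage_spikes(usage))  # ['m_and_a']
```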
Turning Vendor Risk Into a Manageable, Auditable Process
Third-party AI risk management does not need to be a compliance project that consumes an entire team. With the right framework, tooling, and stakeholder alignment, it becomes a repeatable and auditable process that fits within your existing vendor risk management program. The key is to treat AI tools as a distinct asset class within your vendor inventory — one that requires its own assessment criteria, contractual provisions, and monitoring approach — rather than trying to force them into frameworks designed for traditional software.
Start by establishing a cross-functional AI governance committee that includes representation from IT security, legal, compliance, HR, and at least one business unit leader. This committee should own the AI tool approval process, maintain the vendor inventory, and review escalated risk findings. Define clear tiers of approval: tools that can be self-approved by department heads within a defined scope, tools that require IT security sign-off, and tools that require legal review before any deployment. Document these criteria and make them accessible to employees so that shadow AI adoption is reduced through clarity, not just through restriction.
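The approval tiers can also be expressed as a simple routing rule, so employees and reviewers apply the same criteria consistently. The data categories and conditions below are placeholders for whatever your governance committee actually defines, not a recommended policy.

```python
# Hypothetical routing rule for the approval tiers described above; the data
# categories and conditions are placeholders for your committee's own criteria.
def approval_route(data_categories: set[str], integrates_core_systems: bool) -> str:
    if "regulated" in data_categories or integrates_core_systems:
        return "legal review required before deployment"
    if "internal" in data_categories or "confidential" in data_categories:
        return "IT security sign-off required"
    return "department head may self-approve within defined scope"

print(approval_route({"public"}, False))     # department head may self-approve within defined scope
print(approval_route({"internal"}, False))   # IT security sign-off required
print(approval_route({"regulated"}, True))   # legal review required before deployment
```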
Finally, produce internal audit artifacts that demonstrate program maturity. This includes a maintained AI vendor register with assessment dates and risk ratings, copies of executed DPAs and AI-specific contractual provisions, evidence of ongoing monitoring activities, and records of any incidents or policy exceptions. Regulators and auditors are increasingly asking to see evidence of AI governance programs as part of broader technology risk assessments. Organizations that can demonstrate a structured, documented, and continuously monitored third-party AI risk program will be far better positioned than those treating AI vendor oversight as an afterthought. The investment in building this capability now is significantly smaller than the cost of responding to a data exposure, a regulatory inquiry, or a contract dispute that could have been prevented by a more rigorous assessment process.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
