Why AI Tools Create a New GDPR Compliance Problem

When GDPR came into force in May 2018, generative AI was barely a concept outside research labs. Today, employees across European businesses routinely use tools like ChatGPT, Microsoft Copilot, Google Gemini, and dozens of specialized AI assistants to draft documents, analyze data, summarize meetings, and process customer information — often without any formal authorization from their IT or compliance teams. This gap between regulatory intent and operational reality has created one of the most pressing GDPR compliance challenges organizations have faced since the regulation was introduced.

The core problem is straightforward but difficult to manage at scale: when an employee pastes a customer's name, email address, contract details, or health record into an AI prompt, that personal data is transmitted to a third-party system. Under GDPR, that transmission is a data processing activity. It requires a lawful basis, a data processing agreement where applicable, and in many cases a legitimate interest assessment or explicit consent from the data subject. None of that typically happens when an employee opens a browser tab and starts typing into ChatGPT.

For CISOs and compliance officers, the challenge is not just about individual incidents. It is about the systematic, daily, largely invisible flow of personal data into AI systems that have not been vetted, contracted, or approved by the organization. Understanding the full scope of this exposure is the first step toward addressing it.

The Data Controller Dilemma: Who Is Responsible for AI Inputs?

Under GDPR, the organization that determines the purposes and means of processing personal data is the data controller. That means your company — not OpenAI, not Google, not Anthropic — is legally responsible for what your employees submit to AI tools. The fact that an employee acted independently, without explicit instruction from management, does not transfer liability to the individual. The organization remains accountable for ensuring that all processing of personal data under its operational umbrella complies with GDPR requirements.

This creates a significant liability exposure. Consider a scenario where a human resources manager at a Frankfurt-based company uses Claude to summarize performance reviews for 50 employees, pasting in names, salaries, disciplinary histories, and health accommodations. That manager may believe they are simply using a productivity tool. In GDPR terms, however, the company has just processed a substantial volume of sensitive personal data — including special category data under Article 9 — through a third-party AI provider without a Data Processing Agreement in place, without a lawful basis for that specific processing activity, and without informing the data subjects.

Establishing clarity on the controller-processor relationship with AI vendors is essential but not always straightforward. Some vendors, particularly consumer-oriented AI products, explicitly state in their terms of service that they use input data to train future models. Others offer enterprise tiers with contractual commitments against data retention and training. Compliance teams must audit which AI tools employees are using and determine whether each vendor qualifies as a data processor under Article 28 — which requires a written contract specifying the scope, nature, and purposes of processing.

Key GDPR Articles That Apply Directly to AI Tool Usage

Several GDPR provisions are directly implicated whenever employees use AI tools to process personal data. Article 5 establishes the core data protection principles: personal data must be processed lawfully, fairly, and transparently; collected for specified, explicit, and legitimate purposes; limited to what is necessary for those purposes; kept accurate; retained only as long as necessary; and processed with appropriate security. Submitting personal data to an unapproved AI tool potentially violates several of these principles simultaneously, particularly purpose limitation, data minimisation, and storage limitation.

Article 28 governs the relationship between data controllers and data processors. If an AI tool provider processes personal data on behalf of your organization, they must operate under a binding contract that specifies the subject matter, duration, nature, and purpose of processing, as well as the type of personal data involved. Many employees using consumer AI tools have no such agreement in place, and their organization may not even be aware that the vendor relationship exists. This is not a technicality — Data Protection Authorities (DPAs) across Europe have issued substantial fines for Article 28 violations.

For organizations transferring data to AI providers headquartered outside the European Economic Area — which includes most major US-based AI companies — Article 46 requires appropriate safeguards for international data transfers. Standard Contractual Clauses (SCCs) are the most common mechanism, but their validity depends on a Transfer Impact Assessment confirming that the destination country provides essentially equivalent protection to GDPR. The invalidation of Privacy Shield in the 2020 Schrems II judgment and ongoing regulatory scrutiny of US-based cloud services mean this is an area of active enforcement risk, not merely a theoretical concern.

The Hidden Risk: Shadow AI and Unmonitored Employees

Shadow IT has always been a compliance and security challenge, but the proliferation of browser-based AI tools has made the problem orders of magnitude harder to manage. Unlike traditional shadow IT — an employee installing unauthorized software on a company laptop — shadow AI requires nothing more than a browser and a free account. An employee can be using five different AI tools simultaneously without any trace visible to IT or security teams through conventional monitoring approaches.

The scale of shadow AI adoption in European enterprises is significant. Industry research consistently finds that a substantial majority of employees who use AI tools at work do so without explicit employer approval, and a large proportion of those employees admit to entering customer data, internal financials, or proprietary business information into those tools. In regulated industries — financial services, healthcare, legal, pharmaceuticals — this is not just a GDPR problem. It intersects with sector-specific regulations including MiFID II, DORA, the NIS2 Directive, and national data protection laws that layer additional obligations on top of GDPR.

The compliance risk is compounded by the fact that most organizations lack the visibility to even know where the problem exists. Without knowing which AI tools employees are using, which departments are most active, and what categories of data are being processed, compliance teams cannot conduct meaningful Data Protection Impact Assessments, cannot respond accurately to regulatory inquiries, and cannot maintain a complete Record of Processing Activities as required under Article 30. The first step to managing shadow AI is making it visible, without surveilling employees in ways that create separate privacy or labor law concerns.

Building a GDPR-Compliant AI Governance Framework

An effective AI governance framework for GDPR compliance has four interconnected components: policy, visibility, controls, and documentation. Policy means establishing clear, written rules about which AI tools employees may use, under what conditions, and with what categories of data. This includes an approved AI tool list, explicit prohibitions on entering personal data into unapproved tools, and guidance on what constitutes personal data in practical work contexts. Policies that are too vague or too restrictive without supporting infrastructure tend to be ignored.

Visibility means having operational awareness of actual AI tool usage across the organization. This is where purpose-built AI governance platforms become critical. Tools like Zelkir monitor which AI tools employees are accessing through their browsers and classify the nature of that usage — without capturing the actual content of prompts, which would itself raise privacy concerns under GDPR and potentially conflict with European labor law protections around employee monitoring. This approach gives compliance teams the oversight they need while respecting employee privacy and national works council requirements that exist in countries like Germany, France, and the Netherlands.

Controls mean the ability to act on what visibility reveals — blocking access to unapproved tools, requiring employees to acknowledge acceptable use policies before accessing approved tools, and generating alerts when usage patterns suggest personal data may be at risk. Documentation means maintaining records that demonstrate compliance: which tools are approved, which DPAs are in place, which Transfer Impact Assessments have been conducted, and how employee usage aligns with the organization's Record of Processing Activities. Together, these four components give compliance teams both the operational capability and the evidentiary foundation they need to demonstrate GDPR compliance to supervisory authorities.
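To make the control component concrete, the decision logic behind blocking, acknowledgment prompts, and alerts can be sketched as a simple policy lookup. The following is a minimal sketch in Python; the domains, approval statuses, and action names are illustrative assumptions, not a description of how Zelkir or any specific platform implements enforcement.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    REQUIRE_ACK = "require acceptable-use acknowledgment"
    BLOCK_AND_ALERT = "block and alert compliance"

# Illustrative policy table: tool domain -> approval status.
# A real deployment would source this from the governance platform's registry.
POLICY = {
    "copilot.microsoft.com": "approved",
    "claude.ai": "approved_with_ack",
}

def control_decision(domain: str) -> Action:
    """Map an observed AI tool domain to a policy action."""
    status = POLICY.get(domain.lower())
    if status == "approved":
        return Action.ALLOW
    if status == "approved_with_ack":
        return Action.REQUIRE_ACK
    # Unknown or unapproved tools are blocked and surfaced to compliance.
    return Action.BLOCK_AND_ALERT

if __name__ == "__main__":
    for d in ["copilot.microsoft.com", "claude.ai", "random-ai-notetaker.example"]:
        print(d, "->", control_decision(d).value)
```

The value of encoding policy this explicitly is that the same table doubles as documentation: the approved-tool list the policy component requires and the enforcement rule the control component requires stay in sync by construction.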

What Regulators Are Watching in 2025 and Beyond

European Data Protection Authorities have made AI a clear enforcement priority. The Italian DPA's temporary ban on ChatGPT in 2023 was a signal that regulators are prepared to act against AI providers whose data practices do not meet GDPR standards — and that signal extends to organizations that use those tools without appropriate safeguards. The Irish DPC, which supervises many US tech companies operating in the EU due to their European headquarters in Ireland, has opened multiple investigations into AI systems and data practices. France's CNIL has issued detailed guidance on lawful bases for AI-related processing. Germany's conference of DPAs has addressed employee monitoring in the context of AI governance tools.

The EU AI Act, which entered into force in August 2024 and is phasing into application through 2027, adds another regulatory layer. While the AI Act primarily regulates AI systems by risk category rather than AI usage by employees, it intersects with GDPR in important ways — particularly for organizations deploying AI systems in high-risk categories such as employment decisions, credit scoring, or healthcare. Compliance teams need to understand both frameworks and how they interact, rather than treating them as separate workstreams.

Looking ahead, enforcement is likely to become more targeted and more consequential. DPAs are developing greater technical sophistication in understanding how AI systems process data, and the European Data Protection Board has issued guidance that sets out clear expectations for both AI developers and deploying organizations. Organizations that cannot demonstrate meaningful governance of their employees' AI tool usage — including documented policies, monitoring mechanisms, and vendor due diligence — will face increasing scrutiny as regulators move from general guidance to specific enforcement actions.

Where to Start: Three Practical Steps

Start with an AI tool inventory. Before you can govern AI usage, you need to know what is actually happening in your organization. This means deploying a monitoring solution that gives you visibility into which AI tools employees are accessing — across all departments, not just technical teams. The results typically surprise compliance officers: the number of distinct AI tools in use across a mid-sized enterprise commonly runs into the dozens, the majority of which have never been reviewed by legal or IT security.
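As a starting point, and assuming you can export proxy or secure web gateway logs as CSV, even a short script can produce a first-pass inventory of AI tool usage by department. The column names and domain-to-tool mapping below are assumptions to adapt to your own environment; a production inventory would rely on a maintained catalogue of AI services rather than a hard-coded list.

```python
import csv
from collections import Counter

# Hypothetical mapping of domains to AI tools; a real inventory would use
# a maintained catalogue of AI services, not a hard-coded list like this.
AI_TOOL_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Google Gemini",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def inventory_from_proxy_log(path: str) -> Counter:
    """Count AI tool requests per department from a proxy log export.

    Assumes a CSV with 'department' and 'domain' columns; adapt the
    field names to your proxy or secure web gateway's actual schema.
    """
    usage = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["domain"].lower())
            if tool:
                usage[(row["department"], tool)] += 1
    return usage

if __name__ == "__main__":
    for (dept, tool), hits in inventory_from_proxy_log("proxy_log.csv").most_common():
        print(f"{dept} / {tool}: {hits} requests")
```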

Conduct a gap analysis against your existing Data Processing Agreements and Records of Processing Activities. For each AI tool your employees are using, determine whether a DPA exists, whether the vendor's data retention and training practices are compatible with your GDPR obligations, and whether international transfer mechanisms are in place if the vendor operates outside the EEA. Prioritize the highest-risk tools — those most likely to receive personal or special category data based on the roles of employees using them — and work outward from there.
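The gap analysis itself can be structured as a simple cross-reference between the tool inventory and your vendor records. The sketch below assumes a hand-maintained register with a few illustrative fields; the field names and sample vendors are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VendorRecord:
    """Compliance status for one AI tool; fields are illustrative."""
    tool: str
    dpa_signed: bool                 # Article 28 processing agreement on file?
    outside_eea: bool                # data processed outside the EEA?
    transfer_mechanism: str | None   # e.g. "SCCs + TIA", or None if missing
    trains_on_inputs: bool           # vendor may use prompts for training?

def gap_report(records: list[VendorRecord]) -> list[str]:
    """Flag the gaps discussed above for each tool in the inventory."""
    findings = []
    for r in records:
        if not r.dpa_signed:
            findings.append(f"{r.tool}: no Article 28 DPA on file")
        if r.outside_eea and not r.transfer_mechanism:
            findings.append(f"{r.tool}: non-EEA transfer without Article 46 safeguards")
        if r.trains_on_inputs:
            findings.append(f"{r.tool}: inputs may be used for training; review purpose limitation")
    return findings

if __name__ == "__main__":
    register = [
        VendorRecord("Consumer chatbot", dpa_signed=False, outside_eea=True,
                     transfer_mechanism=None, trains_on_inputs=True),
        VendorRecord("Enterprise assistant", dpa_signed=True, outside_eea=True,
                     transfer_mechanism="SCCs + TIA", trains_on_inputs=False),
    ]
    for finding in gap_report(register):
        print(finding)
```

Feeding the discovered inventory from the previous step into a register like this keeps the gap analysis anchored to actual usage rather than to procurement records alone.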

Implement a tiered AI tool approval process that balances compliance rigor with operational practicality. A blanket ban on AI tools will not work — employees will route around it, creating greater shadow AI risk than a structured program. Instead, create a clear pathway for teams to request approval of AI tools, backed by a documented review process that evaluates GDPR compliance, security posture, and business necessity. Communicate approved tools clearly, provide guidance on acceptable use, and use your monitoring capability to reinforce policy rather than replace it. The goal is a compliance program that employees understand and can actually follow — supported by governance infrastructure that gives your compliance team the visibility and documentation to demonstrate that program to any supervisory authority that asks.
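One way to encode such a tiered scheme is as an explicit decision rule that maps a tool's compliance attributes to an approval tier. The tiers and thresholds below are illustrative assumptions, not a definitive classification scheme; calibrate them to your own review process and risk appetite.

```python
def approval_tier(has_dpa: bool, eea_safeguards: bool,
                  handles_personal_data: bool,
                  handles_special_category: bool) -> str:
    """Assign an AI tool to an illustrative approval tier.

    The thresholds here are assumptions for this sketch, not a
    definitive classification scheme.
    """
    if handles_special_category:
        # Article 9 data demands the strictest review, including a DPIA.
        return "Tier 3: restricted pending DPIA and DPO sign-off"
    if handles_personal_data:
        if has_dpa and eea_safeguards:
            return "Tier 2: approved for personal data under acceptable use"
        return "Not approved: missing DPA or transfer safeguards"
    return "Tier 1: approved for non-personal data only"

if __name__ == "__main__":
    print(approval_tier(has_dpa=True, eea_safeguards=True,
                        handles_personal_data=True,
                        handles_special_category=False))
```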

Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
