Why Public AI Models Are a Growing Security Concern
The rapid adoption of publicly available AI tools — ChatGPT, Google Gemini, Claude, Copilot, and dozens of others — has fundamentally changed how employees work. Drafting contracts, summarizing meeting notes, writing code, analyzing spreadsheets: tasks that once required hours now take minutes. For productivity-conscious organizations, the appeal is obvious. For security teams, the implications are far more complicated.
Unlike enterprise software acquired through formal procurement channels, most public AI tools are adopted informally — installed as browser extensions, accessed directly through consumer web interfaces, or embedded in third-party SaaS products employees already use. There is rarely a security review, a data processing agreement, or even basic visibility into what is being sent to these platforms.
The core risk is straightforward: public AI models are trained and operated by third parties, and when an employee submits a prompt, that data leaves the organizational boundary. Depending on the platform's terms of service and data retention policies, that information may be stored, reviewed by human trainers, used to improve the model, or in some cases, surfaced to other users. Enterprises that have not yet grappled with this reality are likely already exposed.
The Data Exposure Problem: What Employees Are Really Sharing
Security professionals tend to think about data loss in terms of traditional exfiltration vectors — email, USB drives, unauthorized cloud storage. AI tools represent a fundamentally different and more subtle risk surface. Employees are not trying to steal data; they are trying to do their jobs faster. That intent does not reduce the exposure.
Consider the types of content that routinely appear in AI prompts: customer records pasted into a summarization request, source code submitted for debugging help, internal financial projections fed into a drafting tool, HR performance reviews cleaned up by a language model, or legal contract language revised by a public chatbot. Each of these scenarios involves confidential or regulated data leaving the enterprise environment with no logging, no audit trail, and often no awareness from the employee that anything sensitive has occurred.
Samsung's widely reported incident in 2023 — where engineers uploaded proprietary source code and internal meeting notes to ChatGPT — was not an act of malice. It was the predictable outcome of deploying powerful AI tools without adequate governance. The same pattern is playing out at organizations of every size and sector. The question is not whether your employees are using public AI models; statistically, they almost certainly are. The question is whether you have any visibility into what they are sharing when they do.
Regulatory and Compliance Implications
Beyond the internal security risk, public AI usage creates measurable compliance exposure. Organizations subject to GDPR, HIPAA, CCPA, SOC 2, PCI-DSS, or industry-specific frameworks face specific obligations around how personal and regulated data is processed and where it flows. Sending customer PII or protected health information to a public AI model without a signed data processing agreement almost certainly violates those obligations — regardless of whether a breach ever occurs.
GDPR Article 28 requires that any third party processing personal data on behalf of a controller operates under a data processing agreement. Most consumer AI platforms do not offer these by default. Organizations using ChatGPT's free or standard tier, for example, are not covered by a DPA with OpenAI unless they have explicitly enrolled in the enterprise tier and executed the appropriate agreement. Many organizations have not done this, yet their employees are submitting personally identifiable information through the consumer interface daily.
For legal and compliance teams, the challenge is compounded because AI usage is largely invisible to conventional controls. DLP tools can flag known data patterns in email or file transfers, but they typically do not monitor the browser interactions that constitute most AI tool usage. Without dedicated AI governance tooling, compliance officers are effectively flying blind — unable to demonstrate to auditors or regulators that appropriate controls are in place.
Shadow AI: The Governance Gap You Can't Afford to Ignore
Shadow IT has been a persistent enterprise security challenge for over a decade. Shadow AI is its faster-moving, higher-risk successor. Where shadow IT typically involved employees adopting unauthorized file-sharing or collaboration tools, shadow AI involves employees submitting organizational data to externally operated AI systems in real time, at scale, with minimal friction.
The challenge is magnified by the velocity of AI tool proliferation. There are now hundreds of AI-powered applications available as browser extensions alone. Many are embedded in tools employees already trust — productivity suites, writing assistants, CRM platforms, development environments. An employee may not even recognize that they are interacting with an AI backend when they use an autocomplete feature in a document editor or a suggested reply in their email client.
Traditional network-level controls are insufficient against this threat profile. VPN-based filtering and firewall rules can block known AI endpoints, but they do not provide granular visibility into usage patterns, cannot distinguish between legitimate and risky interactions, and are easily bypassed by browser-based tools that operate over standard HTTPS. Organizations need a governance layer that sits closer to the point of interaction — one that can identify AI usage, classify its nature, and generate an auditable record without requiring invasive monitoring of the content itself.
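As a rough illustration of what a governance layer at the point of interaction might look like, the sketch below is a browser extension background script that matches outgoing requests against a catalogue of known AI endpoints and records a metadata-only audit event. It assumes Chrome's webRequest API with the standard extension typings; the AI_ENDPOINTS catalogue and the logAuditEvent sink are hypothetical placeholders, not a description of any particular product.

```typescript
// Minimal sketch: a browser-extension background script that records AI tool
// usage as metadata-only audit events. AI_ENDPOINTS and logAuditEvent are
// hypothetical placeholders.

// Catalogue mapping known AI endpoints to a coarse category.
const AI_ENDPOINTS: Record<string, string> = {
  "chat.openai.com": "general-purpose-llm",
  "gemini.google.com": "general-purpose-llm",
  "claude.ai": "general-purpose-llm",
  "api.openai.com": "llm-api",
};

interface AuditEvent {
  timestamp: string;
  host: string;
  category: string;
  method: string;
  // Deliberately no request body: the record shows that an interaction with a
  // given platform happened, not what was submitted.
}

// Hypothetical sink; in practice this would forward to a SIEM or audit store.
function logAuditEvent(event: AuditEvent): void {
  console.log("AI usage event", JSON.stringify(event));
}

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    const host = new URL(details.url).hostname;
    const category = AI_ENDPOINTS[host];
    if (category) {
      logAuditEvent({
        timestamp: new Date().toISOString(),
        host,
        category,
        method: details.method,
      });
    }
  },
  { urls: ["<all_urls>"] }
);
```

The key design choice is that the event captures which platform was used and when, never the prompt body, so the record supports auditing without turning the control into content surveillance.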
How to Build a Practical AI Security Policy
Blocking all AI tools is neither practical nor advisable — it drives usage further underground and puts your organization at a competitive disadvantage. The goal of a mature AI security policy is not prohibition but governance: ensuring that AI tools are used in ways that do not create unacceptable risk, and that you have the visibility to verify compliance.
Start with discovery. Before writing policy, you need accurate data on which AI tools employees are currently using and in what contexts. This requires tooling that operates at the browser level and can detect AI interactions across a broad range of platforms — not just the handful you might be aware of. Most organizations that run a discovery exercise are surprised by both the volume and the variety of AI tool usage they find.
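To make the discovery step concrete, here is a minimal sketch that aggregates browser-level observations into a per-tool inventory: which AI domains are in use, by how many people, and how often, without touching prompt content. The ObservedRequest shape and the KNOWN_AI_DOMAINS list are assumptions for the example; a real catalogue needs to track hundreds of domains and be updated continuously.

```typescript
// Minimal sketch: turning raw browser telemetry into a discovery inventory.
// The input shape and the domain list below are illustrative assumptions.

interface ObservedRequest {
  hostname: string;
  userId: string;
  timestamp: Date;
}

// Deliberately tiny example list; a real one would be far larger and would
// include AI features that ride on domains employees already trust.
const KNOWN_AI_DOMAINS = new Set([
  "chat.openai.com",
  "gemini.google.com",
  "claude.ai",
]);

interface ToolUsage {
  users: Set<string>;
  requestCount: number;
  firstSeen: Date;
  lastSeen: Date;
}

// Aggregate observations into a per-tool inventory: who is using which AI
// tool, how often, and since when. No prompt content is involved.
function buildInventory(events: ObservedRequest[]): Map<string, ToolUsage> {
  const inventory = new Map<string, ToolUsage>();
  for (const e of events) {
    if (!KNOWN_AI_DOMAINS.has(e.hostname)) continue;
    const entry = inventory.get(e.hostname) ?? {
      users: new Set<string>(),
      requestCount: 0,
      firstSeen: e.timestamp,
      lastSeen: e.timestamp,
    };
    entry.users.add(e.userId);
    entry.requestCount += 1;
    if (e.timestamp < entry.firstSeen) entry.firstSeen = e.timestamp;
    if (e.timestamp > entry.lastSeen) entry.lastSeen = e.timestamp;
    inventory.set(e.hostname, entry);
  }
  return inventory;
}
```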
From there, develop a tiered classification of AI tools: approved tools with appropriate enterprise agreements in place, conditionally approved tools with usage restrictions, and prohibited tools that represent unacceptable risk. Pair this with clear guidance for employees on what types of information should never be submitted to an AI system — including specific data classifications such as PII, PHI, financial projections, M&A-related content, and proprietary source code. Enforce this policy through technical controls that generate audit logs, not just acceptable use statements that employees click through during onboarding. Policy without enforcement is security theater. Finally, conduct regular reviews as the AI tool landscape evolves — what was accurate three months ago is likely already outdated.
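The sketch below shows how a tiered classification might translate into an enforcement decision that always leaves an audit record. The tier assignments in TOOL_POLICY are purely illustrative; each organization's classification should follow from its own risk review and vendor agreements.

```typescript
// Minimal sketch: a tiered AI tool policy with enforcement decisions that are
// always logged. The policy map below is illustrative, not a recommendation.

type Tier = "approved" | "conditional" | "prohibited";
type Action = "allow" | "warn" | "block";

// Hypothetical policy map keyed by hostname.
const TOOL_POLICY: Record<string, Tier> = {
  "copilot.example-enterprise.com": "approved",  // enterprise agreement in place
  "chat.openai.com": "conditional",              // allowed, but not for regulated data
  "ai-notetaker.example.com": "prohibited",      // no review, unknown retention
};

interface PolicyDecision {
  host: string;
  tier: Tier;
  action: Action;
  timestamp: string;
}

function decide(host: string): PolicyDecision {
  // Unknown tools default to the most restrictive tier until reviewed.
  const tier: Tier = TOOL_POLICY[host] ?? "prohibited";
  const action: Action =
    tier === "approved" ? "allow" : tier === "conditional" ? "warn" : "block";
  const decision: PolicyDecision = {
    host,
    tier,
    action,
    timestamp: new Date().toISOString(),
  };
  // Every decision produces a record, so enforcement is auditable rather than
  // a click-through acceptable use statement. A real implementation would
  // write to an audit store instead of the console.
  console.log(JSON.stringify(decision));
  return decision;
}

decide("chat.openai.com"); // tier "conditional", action "warn"
```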
Conclusion
The security risks of public AI models in the enterprise are real, material, and actively being realized at organizations that have not implemented appropriate governance. The combination of data exposure, regulatory liability, and near-total lack of visibility creates a risk profile that should command the attention of every CISO and compliance officer — regardless of sector or organization size.
The path forward is not to slow down AI adoption across the board. It is to build the governance infrastructure that allows your organization to benefit from AI tools while maintaining control over what data leaves your environment, which tools are in use, and whether usage aligns with your compliance obligations. That requires visibility first — you cannot govern what you cannot see.
Zelkir was built specifically to address this gap: providing enterprise security and compliance teams with complete visibility into AI tool usage at the browser level, classifying the nature of interactions, and generating audit-ready reports — all without capturing raw prompt content. If your organization is serious about AI governance, the first step is understanding the true scope of your exposure.
AI governance starts with visibility — and visibility starts with the right tooling. Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
