Why AI Tool Risk Assessments Are Now a Security Imperative
The proliferation of AI tools in the enterprise has outpaced most organizations' ability to govern them. Employees at every level — from software engineers using GitHub Copilot to marketing teams drafting copy in ChatGPT — are integrating AI assistants into their daily workflows, often without any formal approval or security review. According to a 2024 Cyberhaven analysis, more than 11% of the data employees paste into AI tools is sensitive. That number compounds quickly across a workforce of thousands.
The challenge for security and compliance teams is not simply that employees are using AI tools — it is that most organizations have no reliable visibility into which tools are being used, how frequently, or what categories of work those tools are being applied to. This creates a class of risk that sits somewhere between shadow IT and data loss prevention: call it shadow AI. Unlike a rogue SaaS application that IT might catch during an invoice review, AI tool usage happens at the browser level, often through free-tier accounts that never appear in procurement records.
A formal AI tool risk assessment gives your organization a structured way to surface this activity, evaluate the risk it creates, and build governance controls proportionate to those risks. This post walks through a practical, step-by-step framework that IT managers, CISOs, and compliance officers can apply regardless of company size or industry. The goal is not to prohibit AI use — that ship has largely sailed — but to bring it under deliberate, auditable control.
Step 1: Discover What AI Tools Employees Are Actually Using
Any meaningful risk assessment begins with accurate inventory. The assumption that your acceptable use policy or approved software list covers the AI tools employees are actually using is almost certainly wrong. Free browser-based AI tools require no procurement approval, no IT provisioning, and no SSO integration. They are invisible to your CASB, your endpoint agent, and your SaaS management platform — unless those tools have been specifically updated to detect AI traffic.
Start by auditing network traffic logs and DNS query data for known AI tool domains. Build a list that includes not just the obvious ones — OpenAI, Anthropic, Google Gemini, Microsoft Copilot — but also the long tail of vertical AI tools: Harvey for legal, Jasper for marketing, Cursor for developers, Notion AI, Grammarly, and dozens of others. Many of these tools sit inside productivity applications employees already have installed, making them particularly easy to miss.
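To make that first pass concrete, here is a minimal sketch that counts hits against known AI domains in a DNS log export. The CSV column names and the domain list are assumptions for illustration; substitute your resolver's actual export format and extend the list as your inventory grows.

```python
# Minimal sketch: count DNS queries to known AI tool domains.
# Assumes a CSV export with columns: timestamp, client_ip, query;
# adjust the column names to match your resolver or SIEM export.
import csv
from collections import Counter

# Starting list only; extend with the long tail of vertical AI tools.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "claude.ai": "Claude",
    "api.anthropic.com": "Anthropic API",
    "gemini.google.com": "Google Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "grammarly.com": "Grammarly",
}

def match_ai_domain(query: str) -> str | None:
    """Return the tool name if the queried domain is a known AI domain."""
    q = query.lower().rstrip(".")
    for domain, tool in AI_DOMAINS.items():
        if q == domain or q.endswith("." + domain):
            return tool
    return None

def scan_dns_log(path: str) -> Counter:
    """Tally DNS hits per AI tool across the whole log."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tool = match_ai_domain(row["query"])
            if tool:
                hits[tool] += 1
    return hits

if __name__ == "__main__":
    for tool, count in scan_dns_log("dns_queries.csv").most_common():
        print(f"{tool}: {count} queries")
```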
Complement traffic analysis with a voluntary employee survey asking teams to self-report the AI tools they use and the types of tasks they use them for. Self-reporting will not catch everything, but it surfaces tools that network analysis misses — particularly API integrations that developers have built internally — and it signals to employees that the organization is engaging with AI governance seriously rather than simply blocking things. Cross-reference both data sources to build your working inventory before moving to risk classification.
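The cross-referencing itself can be a simple set comparison. The sketch below assumes two illustrative CSV exports, one from the survey and one from network analysis; the gaps between the two sources are often more informative than the overlap.

```python
# Minimal sketch: cross-reference self-reported AI tools with
# network-detected ones. Assumes two CSVs, survey_tools.csv and
# network_tools.csv, each with a "tool" column; adjust to your exports.
import csv

def load_tools(path: str, column: str = "tool") -> set[str]:
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

surveyed = load_tools("survey_tools.csv")
detected = load_tools("network_tools.csv")

print("Confirmed by both sources:", sorted(surveyed & detected))
# Self-reported but never seen on the network: often internal API
# integrations or usage from unmanaged devices.
print("Self-reported only:", sorted(surveyed - detected))
# Seen on the network but unreported: follow up with the owning teams
# before assigning a risk classification.
print("Network-detected only:", sorted(detected - surveyed))
```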
Step 2: Classify AI Usage by Risk Category
Not every AI tool presents the same level of risk, and treating them uniformly will either leave your organization exposed or frustrate employees with unnecessary restrictions. The most useful classification framework maps two dimensions against each other: the sensitivity of the data likely to be shared with the tool, and the data handling practices of the tool itself.
Define at least three risk tiers. Tier 1 covers tools that are enterprise-licensed, have signed data processing agreements in place, and offer features like zero-data retention or private deployment — Microsoft 365 Copilot licensed as an add-on to an enterprise Microsoft 365 plan, for example, or Claude for Enterprise with appropriate contractual controls. These tools can be approved for general use with appropriate training. Tier 2 covers tools that may be legitimate and broadly useful but lack enterprise data agreements — the free tier of ChatGPT, for instance — and should be flagged for review before use with any sensitive data. Tier 3 covers tools with no enterprise agreements, unclear data retention practices, or a history of security incidents, and should be blocked or restricted pending review.
Apply these tiers not just to the tool itself but to the type of usage. A developer using an unapproved AI code completion tool to refactor a public-facing open-source library is a very different risk profile from a finance analyst pasting revenue projections into the same tool. Governance controls need to account for both the tool and the context of use — which is why usage classification, not just tool classification, is central to a mature AI risk program.
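One way to operationalize this is to score the tool and the context together rather than separately. In the sketch below, the tool names, tier assignments, sensitivity levels, and thresholds are all illustrative policy choices, not vendor assessments; your own matrix should come from the classification work in this step.

```python
# Minimal sketch: effective risk as a function of both the tool's tier
# and the sensitivity of the data in play. All assignments and thresholds
# below are illustrative placeholders, not vendor assessments.
from enum import IntEnum

class ToolTier(IntEnum):
    TIER_1 = 1  # enterprise-licensed, DPA signed, retention controls
    TIER_2 = 2  # legitimate but no enterprise data agreement
    TIER_3 = 3  # no agreement, unclear retention, or incident history

class DataSensitivity(IntEnum):
    PUBLIC = 1      # open-source code, published marketing copy
    INTERNAL = 2    # routine internal documents
    RESTRICTED = 3  # PII, financials, legal strategy, proprietary code

# Hypothetical tool names for illustration.
TOOL_TIERS = {
    "enterprise-assistant": ToolTier.TIER_1,
    "free-chatbot": ToolTier.TIER_2,
    "unknown-extension": ToolTier.TIER_3,
}

def effective_risk(tool: str, data: DataSensitivity) -> str:
    """Combine tool tier and usage context into a recommended action."""
    tier = TOOL_TIERS.get(tool, ToolTier.TIER_3)  # unknown tools score high
    score = int(tier) * int(data)
    if score <= 2:
        return "allow"
    if score <= 4:
        return "allow with training and review"
    return "block pending review"

# The same tool yields very different outcomes depending on context:
print(effective_risk("free-chatbot", DataSensitivity.PUBLIC))      # allow
print(effective_risk("free-chatbot", DataSensitivity.RESTRICTED))  # block pending review
```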
Step 3: Evaluate Data Exposure and Privacy Implications
Data exposure is the most immediate and consequential risk category in any enterprise AI assessment. When employees interact with AI tools, they are frequently sharing information that belongs in tightly controlled environments: customer PII, proprietary source code, internal financial data, legal strategy, HR records, and confidential deal terms. In most cases, employees are not acting maliciously — they are trying to be productive, and the friction between their tools and their tasks is low enough that they do not pause to consider the downstream implications.
For each tool in your inventory, research and document how the vendor handles submitted data. Key questions include: Does the vendor use submitted prompts to train future models? What is the default data retention period? Is there a business associate agreement available for healthcare organizations, or a DPA for GDPR compliance? Does the tool offer an API mode that excludes data from training? Does the vendor undergo third-party security audits, and are SOC 2 Type II reports available? These answers will not always be straightforward to find, and in some cases vendors will provide different answers to direct inquiry than what their public documentation suggests — which is itself a risk signal.
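Capturing those answers in a structured record, rather than in scattered notes, makes the gaps themselves visible and trackable. A minimal sketch, with illustrative values rather than a real vendor assessment:

```python
# Minimal sketch: record vendor data-handling answers in a structured
# form so unanswered questions stay visible. All values below are
# illustrative, not an assessment of any real vendor.
from dataclasses import dataclass, fields

@dataclass
class VendorDataPractices:
    vendor: str
    trains_on_prompts: bool | None   # None means "could not confirm"
    retention_days: int | None
    dpa_available: bool | None
    baa_available: bool | None       # relevant for HIPAA-scoped data
    api_training_optout: bool | None
    soc2_type2_report: bool | None

    def unconfirmed(self) -> list[str]:
        """Unanswered questions are themselves a risk signal."""
        return [f.name for f in fields(self)
                if getattr(self, f.name) is None]

record = VendorDataPractices(
    vendor="example-ai-tool",
    trains_on_prompts=None,  # docs and direct inquiry gave different answers
    retention_days=30,
    dpa_available=True,
    baa_available=None,
    api_training_optout=True,
    soc2_type2_report=False,
)
print("Follow up on:", record.unconfirmed())
```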
Pay particular attention to tools that operate as browser extensions or integrate directly into productivity suites. These tools often have broader access to content than a standalone chat interface. A browser extension that helps employees write emails may have read access to the entire contents of their inbox, depending on the permissions granted at install. Map the permission scope of each tool against the categories of data it could theoretically access, not just the data employees consciously share with it.
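For Chromium-based browsers, an extension's permission scope is declared in its manifest.json, which makes a first-pass audit scriptable. The sketch below reads a Manifest V3 layout; which permissions count as sensitive is a judgment call to tune for your own threat model.

```python
# Minimal sketch: flag broad permission scopes in a browser extension's
# manifest.json (Chrome Manifest V3 layout). The "sensitive" sets below
# are a starting point; tune them to your environment.
import json

BROAD_HOSTS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}
SENSITIVE_PERMS = {"tabs", "history", "clipboardRead", "webRequest", "cookies"}

def audit_manifest(path: str) -> list[str]:
    with open(path) as f:
        manifest = json.load(f)
    findings = []
    if set(manifest.get("host_permissions", [])) & BROAD_HOSTS:
        findings.append("can read page content on every site the user visits")
    for perm in set(manifest.get("permissions", [])) & SENSITIVE_PERMS:
        findings.append(f"requests sensitive permission: {perm}")
    return findings

for finding in audit_manifest("manifest.json"):
    print("FLAG:", finding)
```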
Step 4: Map AI Tool Usage to Compliance Obligations
Your organization's regulatory environment should directly shape how you prioritize AI governance controls. Different frameworks impose different obligations, and a risk assessment that ignores the compliance dimension will produce recommendations that legal and compliance teams cannot support in practice. The most relevant frameworks for most enterprise organizations are GDPR and comparable U.S. state privacy laws such as the CCPA, HIPAA, SOC 2, ISO 27001, and — increasingly — the EU AI Act for organizations operating in European markets.
Under GDPR, the transfer of personal data to a third-party AI tool triggers data processing obligations. If an employee submits a customer support ticket containing a European resident's email address to an AI summarization tool, that is a disclosure of personal data to a third-party processor, which requires both a signed DPA with the vendor and a lawful basis for the processing. Most AI tool vendors serving enterprise customers now offer DPAs, but they must actually be executed — the tool being available does not mean the compliance obligation is automatically satisfied.
The EU AI Act introduces a risk-based classification system for AI systems themselves. While most enterprise use of commercial AI tools falls into low-risk or minimal-risk categories, organizations in regulated sectors — finance, healthcare, critical infrastructure — may find that certain AI applications fall under higher-risk classifications requiring specific documentation, human oversight mechanisms, and conformity assessments. Even for organizations not directly subject to the EU AI Act, mapping your AI tool inventory against the Act's risk categories is a useful internal exercise that will inform governance priorities as regulations evolve globally.
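One lightweight way to run that internal exercise is a triage table mapping each inventoried use case onto an Act-style category. The mapping below is a crude illustration, not a legal determination; classification under the Act depends on the specifics of each deployment and should be confirmed with counsel.

```python
# Minimal sketch: triage inventoried AI use cases against EU AI Act-style
# risk categories. The keyword mapping is illustrative only; real
# classification requires legal review of the specific use case.
AI_ACT_TRIAGE = {
    "hiring / candidate screening": "high-risk (employment, Annex III)",
    "credit scoring": "high-risk (essential services, Annex III)",
    "customer-facing chatbot": "limited risk (transparency obligations)",
    "code generation": "minimal risk",
    "marketing copy drafting": "minimal risk",
}

# Hypothetical inventory entries: (tool, primary use case).
inventory = [
    ("resume-screener.example", "hiring / candidate screening"),
    ("code-assistant.example", "code generation"),
    ("support-bot.example", "customer-facing chatbot"),
]

for tool, use_case in inventory:
    category = AI_ACT_TRIAGE.get(use_case, "unmapped; review manually")
    print(f"{tool}: {use_case} -> {category}")
```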
Step 5: Implement Governance Controls and Monitoring
A risk assessment without corresponding controls is an audit artifact, not a security program. Once you have inventoried AI tools, classified their risk, evaluated data exposure, and mapped compliance obligations, the next step is implementing governance controls that are proportionate, sustainable, and monitorable over time. The temptation at this stage is to default to blocking — but blanket prohibition of unapproved AI tools without approved alternatives will simply push usage to personal devices or alternative access methods.
Effective governance at the enterprise level requires visibility into AI tool usage at the activity level, not just the network level. Knowing that an employee visited chat.openai.com tells you very little about the risk that visit created. Understanding that the visit involved what appears to be code generation activity — versus customer data processing, or internal strategy discussion — tells you significantly more. This is the distinction between network-level blocking and behavioral monitoring, and it is where modern AI governance platforms add the most value. Tools like Zelkir operate as browser extensions that classify the nature of AI tool usage without capturing raw prompt content, giving compliance teams the usage signal they need without creating a secondary privacy problem by logging sensitive employee inputs.
Layer your controls across three levels:

- Technical controls: approved tool lists enforced at the browser or network level, DLP policies updated to include AI tool destinations, and SSO enforcement for enterprise-licensed tools (see the sketch after this list).
- Policy controls: an AI acceptable use policy that is specific rather than generic, clearly defining approved tools, prohibited use cases, and the process for requesting a new tool review.
- Monitoring controls: ongoing visibility into which tools are being used across the organization and how that usage changes over time, with alerting for new AI tools appearing in your environment.

Review your control effectiveness quarterly — the AI tool landscape changes fast enough that a point-in-time assessment will be outdated within months.
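As one illustration of the technical layer, browser-level enforcement can be generated directly from your tier classification. The sketch below emits a Chrome enterprise policy fragment; URLBlocklist and URLAllowlist are real Chrome policy names, but the domains are placeholders standing in for your own Tier 3 and approved lists.

```python
# Minimal sketch: emit a Chrome enterprise policy fragment that blocks
# Tier 3 AI domains and explicitly allows approved tools. The domain
# lists are placeholders; drive them from your own tier classification.
import json

TIER_3_DOMAINS = ["unvetted-ai-tool.example", "risky-extension.example"]
APPROVED_DOMAINS = ["copilot.microsoft.com", "claude.ai"]

policy = {
    "URLBlocklist": TIER_3_DOMAINS,
    # Allowlist entries take precedence over blocklist entries, which is
    # useful when you block broad patterns but approve specific tools.
    "URLAllowlist": APPROVED_DOMAINS,
}

with open("ai_tool_policy.json", "w") as f:
    json.dump(policy, f, indent=2)

print(json.dumps(policy, indent=2))
```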
Building a Repeatable AI Risk Assessment Program
A single AI risk assessment is better than nothing, but the nature of the AI tool market makes a one-time exercise insufficient. New AI tools launch weekly. Existing tools change their data handling practices, sometimes quietly. Employees cycle in and out of roles, bringing different tool preferences with them. And the regulatory environment continues to evolve, particularly in the EU, where AI Act implementation guidance will shape compliance obligations for years to come. The goal of your first assessment should be to establish the processes, data sources, and ownership structures that allow the assessment to become a continuous program rather than a periodic project.
Assign clear ownership. AI governance sits at the intersection of IT, security, legal, compliance, and increasingly HR — which means it is easy for accountability to diffuse across teams without any one function owning the outcome. Designate an AI governance lead, even if it is a shared responsibility housed within an existing security or compliance function. This person should own the tool inventory, maintain the risk classification framework, manage vendor DPA tracking, and coordinate with department heads when new AI use cases emerge. Many organizations are formalizing this as an AI security officer role, either as a dedicated position or as a formal extension of an existing CISO remit.
Build your AI risk assessment into existing governance rhythms. Tie new AI tool requests to your existing software approval workflow. Add AI tool coverage to your annual vendor risk assessment cycle. Include AI governance metrics — number of approved tools, percentage of AI activity on approved tools, open DPAs, identified exceptions — in your quarterly security reporting to the board. The organizations that manage AI risk most effectively are not the ones with the most restrictive policies — they are the ones with the clearest visibility, the fastest approval processes for legitimate tools, and the governance infrastructure to catch and respond to risks before they become incidents.
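Once the inventory and monitoring exist, those reporting metrics are cheap to compute. A minimal sketch, assuming an activity export with one row per AI interaction and columns tool and approved; adapt the schema to whatever your monitoring platform actually produces.

```python
# Minimal sketch: compute board-level AI governance metrics from an
# activity export. Assumes a CSV with columns: tool, approved (true/false).
import csv

def quarterly_metrics(path: str) -> dict:
    total = approved_events = 0
    tools: set[str] = set()
    approved_tools: set[str] = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            tools.add(row["tool"])
            if row["approved"].strip().lower() == "true":
                approved_events += 1
                approved_tools.add(row["tool"])
    pct = round(100 * approved_events / total, 1) if total else 0.0
    return {
        "tools_observed": len(tools),
        "approved_tools_in_use": len(approved_tools),
        "pct_activity_on_approved_tools": pct,
    }

print(quarterly_metrics("ai_activity.csv"))
```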
The window for establishing proactive AI governance is not unlimited. As AI tool adoption accelerates and as regulators sharpen their expectations, organizations that have not yet built systematic visibility and control will find themselves responding to incidents and audits rather than shaping outcomes. Running a structured AI tool risk assessment now — using the framework outlined here — positions your security and compliance teams as informed partners in your organization's AI strategy rather than obstacles to it.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
