Why AI Tools Are Disrupting HIPAA Compliance Programs
Healthcare organizations have spent decades building compliance programs around a relatively predictable threat model: defined systems, known vendors, signed agreements, and auditable data flows. HIPAA's administrative safeguards were designed for an era when 'using a new tool' meant submitting a procurement request, not opening a browser tab. That model is under serious pressure. In 2024 and beyond, employees at health systems, insurance companies, revenue cycle management firms, and healthcare SaaS vendors are using AI tools — ChatGPT, Claude, Gemini, Copilot, and dozens of others — as casually as they use Google Search. The compliance implications are significant and largely unresolved.
The core problem is structural. HIPAA's Privacy and Security Rules require covered entities and their business associates to maintain control over how protected health information, or PHI, is accessed, transmitted, and stored. But consumer-grade and enterprise AI tools introduce a new category of risk: employees submitting queries that contain or imply PHI to third-party systems that may not have signed a Business Associate Agreement, may not maintain HIPAA-compliant infrastructure, and may retain prompt data for model training purposes. Compliance officers who have never heard of a particular AI tool may find it running in their environment simply because a clinical administrator found it useful for drafting prior authorization letters.
This isn't a hypothetical risk. The HHS Office for Civil Rights has made clear that the BAA requirement extends to any vendor that creates, receives, maintains, or transmits PHI on behalf of a covered entity. If an employee submits a patient summary to an AI tool to generate a care plan template, and that tool's vendor hasn't signed a BAA, the organization may be in violation — regardless of whether any harm resulted. Understanding this exposure requires first understanding what HIPAA actually demands in the context of AI.
What HIPAA Actually Requires When Employees Use AI
HIPAA's Business Associate Agreement requirement is triggered when a vendor or service provider performs functions or activities that involve the use or disclosure of PHI on behalf of a covered entity. This is a functional test, not a technical one. It doesn't matter whether the tool was designed for healthcare. What matters is whether PHI ends up in the system and whether the vendor is acting on the covered entity's behalf in processing it. When a billing coordinator pastes a patient's name, date of birth, and diagnosis code into an AI assistant to draft an appeal letter, the AI vendor almost certainly meets the definition of a business associate under 45 CFR § 160.103.
The BAA requirement carries specific obligations. The agreement must establish permitted uses and disclosures of PHI, require the business associate to implement appropriate safeguards, mandate breach notification, and ensure the return or destruction of PHI at contract termination. Most general-purpose AI vendors — even enterprise tiers — do not offer BAAs by default. Some, including Microsoft for its Azure OpenAI Service and certain configurations of Google's Workspace AI features, do offer BAA coverage under specific contractual arrangements. But those arrangements require deliberate procurement, configuration, and documentation. A free or individual-tier subscription to any major AI platform almost certainly does not come with BAA coverage.
Beyond the BAA requirement, HIPAA's Security Rule requires covered entities to conduct risk analyses that account for every system that creates, receives, maintains, or transmits electronic PHI (ePHI). If AI tools are in use and not inventoried, they cannot be included in risk analyses, cannot be covered by security policies, and cannot be properly evaluated for technical safeguards like encryption, access controls, and audit logging. This is the compliance gap that most healthcare organizations currently have — not because they're negligent, but because the tools proliferated faster than governance frameworks could adapt.
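To make the inventory requirement concrete, here is a minimal sketch of what a risk-analysis record for an AI tool might capture. The schema is illustrative, not drawn from any regulation or standard; adapt the fields to your own asset-inventory template.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolInventoryRecord:
    """Illustrative inventory entry for a Security Rule risk analysis.

    All field names are hypothetical; adapt them to your organization's
    risk-analysis template and asset-inventory schema.
    """
    tool_name: str                   # e.g., "ChatGPT", "Azure OpenAI"
    vendor: str
    baa_signed: bool                 # is a Business Associate Agreement in place?
    baa_scope: str                   # which functions the BAA actually covers
    deployment: str                  # "enterprise-managed" vs. "individual account"
    phi_permitted: bool              # may PHI touch this tool under policy?
    encryption_reviewed: bool        # technical safeguards evaluated?
    access_controls_reviewed: bool
    audit_logging_available: bool
    approved_use_cases: list[str] = field(default_factory=list)
```

Once AI tools are captured in a structure like this, the risk analysis can enumerate them alongside EHRs, email, and file shares rather than leaving them invisible.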
The BAA Gap: When Vendors Won't Sign or Can't Comply
One of the more uncomfortable realities of AI governance in healthcare is that many of the most capable and widely used AI tools simply will not execute a BAA. OpenAI's standard consumer and API terms explicitly state that users should not submit PHI unless they have a specific enterprise agreement that includes a BAA. Anthropic, the maker of Claude, similarly limits PHI use to customers with specific commercial agreements. Perplexity, Midjourney, and most AI-powered browser extensions offer no BAA pathway at all. For compliance teams, this means a large and growing category of tools must be treated as categorically off-limits for any workflow that could touch PHI.
The challenge is enforcement. Unlike a corporate email system or an EHR, browser-based AI tools don't require IT provisioning. Any employee with internet access can create a free account and begin using a tool within minutes. Without visibility into which tools are actually being used across the organization, compliance teams are essentially governing by policy alone — issuing acceptable use guidelines and hoping employees follow them. That approach is insufficient under HIPAA's administrative safeguard requirements, which demand that covered entities implement procedures to guard against unauthorized access to PHI, not merely discourage it.
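One way to begin establishing that visibility, short of a full governance platform, is to scan egress or proxy logs for traffic to known AI tool domains. The sketch below assumes a CSV proxy-log export with 'user' and 'host' columns and a hand-maintained domain list; both are assumptions to adapt, and a crude scan like this only surfaces usage, it cannot classify it.

```python
import csv
from collections import Counter

# Hypothetical watch list of AI tool domains; extend for your environment.
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "www.perplexity.ai": "Perplexity",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to known AI tools in a proxy log.

    Assumes a CSV export with 'user' and 'host' columns; adjust the
    parsing to match your proxy's actual log format.
    """
    usage = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["host"])
            if tool:
                usage[(row["user"], tool)] += 1
    return usage

if __name__ == "__main__":
    for (user, tool), count in scan_proxy_log("proxy_log.csv").most_common():
        print(f"{user} -> {tool}: {count} requests")
```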
There is also a middle category worth noting: AI tools that are marketed to healthcare organizations and claim HIPAA compliance but whose BAA coverage is narrowly scoped. An AI documentation tool might sign a BAA covering its core transcription function while explicitly excluding any data submitted through an open chat interface. Compliance officers need to read BAA scopes carefully and validate that the specific use cases their employees are engaging in fall within covered activities. Vendor-provided BAAs are not always drafted to match how employees actually use the product.
How AI Governance Platforms Close the Compliance Loop
Effective HIPAA compliance in an AI-enabled environment requires more than policy documents and vendor agreements. It requires operational visibility — knowing which AI tools employees are using, how frequently, and in what context. This is the problem that AI governance platforms are designed to solve. Rather than relying on self-reporting or periodic audits, a governance platform like Zelkir provides continuous, automated monitoring of AI tool usage across the organization without capturing or storing the raw content of employee prompts.
This distinction matters enormously in a healthcare context. A governance tool that captures prompt content would itself become a potential repository of PHI, creating a new compliance liability rather than resolving one. Zelkir's approach classifies the nature and category of AI usage — identifying which tools are in use, whether they involve sensitive workflow types, and whether they fall outside approved vendor lists — without recording what was actually typed. This gives compliance teams the audit trail they need to demonstrate oversight and respond to incidents without creating new exposure.
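As an illustration of what metadata-only logging can look like, the sketch below defines a usage event that deliberately carries no prompt or response text. The field names are hypothetical and are not a description of Zelkir's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIUsageEvent:
    """Metadata-only record of an AI interaction.

    Deliberately excludes prompt and response text so the governance
    log itself can never become a repository of PHI.
    """
    timestamp: datetime
    user_id: str              # internal employee identifier, not patient data
    tool: str                 # e.g., "ChatGPT"
    category: str             # e.g., "document-drafting", "summarization"
    approved_tool: bool       # is this tool on the approved vendor list?
    sensitive_workflow: bool  # classified as a PHI-adjacent workflow type?

event = AIUsageEvent(
    timestamp=datetime.now(timezone.utc),
    user_id="emp-1042",
    tool="ChatGPT",
    category="document-drafting",
    approved_tool=False,
    sensitive_workflow=True,
)
```

A record like this is enough to flag an unapproved tool touching a sensitive workflow and to reconstruct an incident timeline, without the log itself ever holding what was typed.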
From a HIPAA administrative safeguards perspective, this kind of visibility supports several required implementation specifications. It enables covered entities to identify unauthorized software and systems, document workforce activity for audit purposes, and respond to potential breaches with concrete evidence rather than reconstructed timelines. When HHS OCR investigates a breach or conducts an audit, one of the first things they examine is whether the organization had reasonable administrative controls in place. A documented AI governance program, with usage logs and policy enforcement records, is a meaningful demonstration of due diligence.
Building an AI Acceptable Use Policy for Healthcare Organizations
A HIPAA-compliant AI acceptable use policy needs to go beyond a generic prohibition on 'sharing patient data with unauthorized tools.' It should provide clear, role-specific guidance that reflects how employees actually work. Clinical staff have different AI use patterns than billing teams, and both differ from IT administrators or legal counsel. A well-constructed policy defines which AI tools are approved, specifies the workflows in which they may be used, and draws explicit lines around PHI-adjacent tasks.
The policy should include a tiered approval framework:

- Tier one: fully approved tools with signed BAAs, documented security reviews, and IT-managed deployment. These are permissible for workflows involving PHI.
- Tier two: approved tools without BAA coverage. These are permissible for administrative or operational tasks that do not involve patient data.
- Tier three: all other tools. These are restricted and require IT review before use.

This framework gives employees a practical decision tree rather than a blanket prohibition that gets ignored in practice.
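The tier decision can be encoded mechanically once a tool's governance attributes are known. Below is a minimal sketch of the decision tree, assuming four illustrative attributes; a real policy engine would carry more nuance.

```python
from dataclasses import dataclass

@dataclass
class ToolStatus:
    baa_signed: bool
    security_reviewed: bool
    it_managed: bool
    approved: bool  # on the organization's approved vendor list

def classify_tier(tool: ToolStatus) -> tuple[int, str]:
    """Map a tool's governance attributes to a policy tier.

    Mirrors the three-tier framework above: tier 1 tools may handle
    PHI, tier 2 tools are limited to non-PHI work, and tier 3 requires
    IT review before any use.
    """
    if tool.approved and tool.baa_signed and tool.security_reviewed and tool.it_managed:
        return 1, "Permitted for workflows involving PHI"
    if tool.approved:
        return 2, "Permitted for administrative tasks only; no patient data"
    return 3, "Restricted; requires IT review before use"
```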
Training is equally important. Many HIPAA violations involving AI tools are not malicious — they result from employees who genuinely didn't understand that pasting a patient summary into a chatbot constitutes a potential disclosure. Annual HIPAA training should be updated to include specific examples of AI-related risks, including the scenario of using a non-approved AI assistant to draft clinical correspondence or summarize medical records. Workforce training records should document that AI-specific content was covered, creating an additional layer of evidence for any future OCR investigation.
What to Do When an Employee Uses an Unauthorized AI Tool
Despite strong policies and training, unauthorized AI use will occur. The question is not whether to prepare for it, but how. Healthcare organizations should establish a clear incident response protocol specifically for AI-related policy violations, distinct from but integrated with the broader HIPAA breach response framework. The first step is assessment: does the incident meet HIPAA's definition of a breach, that is, was PHI impermissibly acquired by or disclosed to an unauthorized party? Answering that requires determining what data was submitted, to which tool, under what terms of service, and whether there is any evidence of retention or further processing.
If PHI was submitted to a tool without a BAA, the disclosure is presumed to be a breach unless the organization can demonstrate, through a documented risk assessment, a low probability that the PHI has been compromised. Absent that showing, the organization faces notification obligations under 45 CFR §§ 164.404 through 164.408, which include notifying affected individuals, HHS, and in some cases the media. Acting quickly and documenting the response process is essential. Organizations that can demonstrate they detected the incident promptly through automated monitoring, assessed it systematically, and responded in accordance with their policies are in a meaningfully better position than those discovering the incident months later through other means.
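The risk assessment itself turns on the four factors enumerated at 45 CFR § 164.402(2): the nature and extent of the PHI involved, the unauthorized recipient, whether the PHI was actually acquired or viewed, and the extent of mitigation. The sketch below simply structures that documentation; the regulation requires a documented, good-faith analysis, not any particular rubric or scoring formula.

```python
from dataclasses import dataclass

@dataclass
class BreachRiskAssessment:
    """Documents the four factors from 45 CFR § 164.402(2).

    The free-text fields hold the analysis; 'low_probability' records
    the conclusion. How each factor is weighed is a judgment call for
    the privacy officer, not a formula.
    """
    nature_and_extent: str       # types of identifiers, re-identification risk
    unauthorized_recipient: str  # who received the PHI (e.g., AI vendor, its terms)
    acquired_or_viewed: str      # evidence of acquisition, retention, training use
    mitigation: str              # e.g., vendor deletion confirmation, account closure
    low_probability: bool        # conclusion: low probability PHI was compromised?

    def notification_required(self) -> bool:
        # The presumption of breach stands unless low probability is demonstrated.
        return not self.low_probability
```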
Remediation should include a review of how the unauthorized use occurred — whether it reflects a gap in training, an ambiguous policy, inadequate access controls, or a failure of monitoring. The goal is not punitive action against the individual employee, but a systemic improvement that reduces the likelihood of recurrence. Governance data showing patterns of unauthorized AI use across teams or departments can surface structural issues — such as a specific workflow that employees are trying to automate without sanctioned tools — that can be addressed proactively rather than reactively.
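Building on the metadata-only events sketched earlier, a simple aggregation by department and tool can surface the structural patterns described above. The event fields here are the same illustrative assumptions.

```python
from collections import Counter

# Hypothetical metadata-only events (no prompt content), as sketched earlier.
events = [
    {"department": "billing", "tool": "ChatGPT", "approved": False},
    {"department": "billing", "tool": "ChatGPT", "approved": False},
    {"department": "clinical-admin", "tool": "Claude", "approved": False},
    {"department": "it", "tool": "Azure OpenAI", "approved": True},
]

# Count unauthorized use by (department, tool) to spot structural patterns,
# e.g., a whole team routing the same workflow through an unapproved tool.
hotspots = Counter(
    (e["department"], e["tool"]) for e in events if not e["approved"]
)

for (dept, tool), n in hotspots.most_common():
    print(f"{dept}: {n} unauthorized uses of {tool}")
```

A cluster of unauthorized use concentrated in one department usually points to a workflow that needs a sanctioned tool, not to a training failure.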
Conclusion: Governance Is the New Compliance Infrastructure
HIPAA was written when the boundary between an organization's systems and the outside world was relatively clear. AI tools have dissolved that boundary in practical terms while leaving the legal framework largely unchanged. Covered entities and business associates are still fully responsible for PHI that flows through AI tools used by their employees, regardless of whether those tools were officially provisioned or understood by compliance teams. The organizations that navigate this environment successfully will be those that treat AI governance as core compliance infrastructure, not an IT side project.
That means investing in visibility before a breach occurs, not after. It means updating BAA review processes to include AI vendor assessments as a standard step. It means writing policies that reflect how people actually work and providing training that makes the risks concrete and understandable. And it means deploying tools that give compliance teams operational insight into AI usage patterns without creating new privacy risks in the process.
The convergence of AI adoption and regulatory scrutiny in healthcare is not going to slow down. HHS OCR has signaled increasing interest in how covered entities manage emerging technology risks, and state-level health privacy laws are adding additional layers of complexity. Building a governance program now — with clear policies, approved vendor lists, workforce training, and automated monitoring — is both a compliance imperative and a practical competitive advantage. Organizations that can demonstrate mature AI governance will be better positioned for audits, better protected against breaches, and better equipped to realize the genuine productivity benefits that AI tools offer without the liability exposure that comes from unmanaged adoption.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
