Why AI in Education Creates New FERPA Risk
Artificial intelligence has moved into education faster than most compliance frameworks anticipated. Teachers are using AI writing assistants to provide feedback on student essays. Advisors are querying large language models to draft personalized outreach to struggling students. Administrators are feeding enrollment data into AI tools to generate reports. Each of these activities may be routine from a workflow perspective, but each one carries a meaningful risk of violating the Family Educational Rights and Privacy Act (FERPA).
The core problem is not malicious intent. Educators are not trying to expose student records. The problem is invisibility. When an employee opens a consumer-grade AI tool in their browser and pastes in a student's name, grade, or behavioral history to get help drafting a message, there is no institutional record of that interaction. There is no vendor agreement in place. There is no review of what the AI provider does with that data. And from the institution's perspective, it never happened — until a breach occurs or an audit surfaces it.
For CISOs, compliance officers, and legal counsel at K-12 districts, colleges, and universities, this is no longer a theoretical concern. The U.S. Department of Education has made clear that FERPA obligations extend to how institutions manage technology partners and employee behavior. The question is whether your governance infrastructure is built to meet that standard in the age of AI.
What FERPA Actually Requires of Institutions
FERPA grants parents, and students themselves once they turn 18 or enroll in a postsecondary institution, the right to control access to their education records. It prohibits institutions from disclosing personally identifiable information from those records to third parties without written consent, subject to specific exceptions. Those exceptions include disclosures to school officials with a legitimate educational interest, a category that can extend to vendors operating under what are called 'school official' designations, provided a formal written agreement is in place.
This is where the AI era creates structural tension. For a vendor to qualify as a school official under FERPA, the institution must have a written agreement that specifies the legitimate educational interest being served, prohibits the vendor from redisclosing or using data for any other purpose, and requires the vendor to maintain FERPA-compliant controls. Most enterprise SaaS platforms — the Canvases and Blackboards of the world — have these agreements. Most consumer AI tools, including many popular large language model interfaces, do not.
The institution also bears responsibility for ensuring that employees understand which tools are approved, what categories of student information can be input into any given system, and what constitutes an education record in the first place. FERPA's definition is broad: it includes grades, transcripts, disciplinary records, special education plans, financial aid records, and any other information directly related to a student that is maintained by the institution. If an employee enters that kind of data into an unsanctioned AI tool, the institution has likely committed a disclosure violation regardless of whether that employee intended harm.
How Employees Are Using AI With Student Data
To govern AI use effectively, institutions first need an honest picture of what is actually happening. Based on patterns observed across the education sector, the most common high-risk behaviors fall into several categories. The first is drafting communications: staff use AI tools to write emails, progress reports, or intervention letters that reference specific students by name and include details about academic performance or behavioral issues. The second is generating reports: administrators and counselors paste data exports or summaries into AI tools to get help formatting or analyzing information.
The third — and arguably most dangerous — is using AI as an ad hoc data analyst. An advisor might upload a spreadsheet of at-risk student indicators to an AI platform with file-upload capability to identify patterns. A registrar might ask an AI tool to help reconcile enrollment records. These are the kinds of tasks that feel like responsible productivity, and they may well be, but only if the tool being used has been vetted, contracted, and approved.
There is also a growing category of AI use in student-facing contexts: instructors using AI to generate personalized feedback, AI-assisted grading tools embedded in learning management systems, and tutoring bots that interact directly with students. While some of these are deployed through sanctioned platforms, others are ad hoc experiments by individual faculty. Each one represents a data flow the institution needs to account for in its FERPA analysis.
The Consent and Vendor Agreement Problem
The vendor agreement gap is arguably the most significant compliance vulnerability in education AI governance today. When an employee uses a personal or free-tier AI subscription to complete work tasks, that usage exists entirely outside the institution's contractual control. The institution cannot point to a data processing agreement, a FERPA school official designation, or any prohibition on secondary use of the data. If the AI provider's terms of service permit training on user inputs — and many do by default — then student data submitted through that interface may be used to train future models. That is a FERPA violation with no easy remediation.
Institutions often assume that approved tools lists solve this problem, but an approved tools list is only effective if employees actually follow it and if the institution can verify compliance. Without visibility into which AI tools are being accessed, an approved tools list is a policy document, not a control. A faculty member who prefers one AI writing tool over the institution's approved alternative has no technical barrier preventing them from using it. And in many institutions, there is no mechanism for detecting that this is happening at all.
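To make that detection gap concrete, here is a minimal sketch of how an institution might check observed web traffic against its approved AI tools list. The log format, domain names, and function are illustrative assumptions, simplified far below what a real proxy or DNS log would contain, and do not reflect any specific product or institutional system.

```python
# Minimal sketch: checking access logs against an approved AI tools list.
# All domains and the log format below are hypothetical placeholders.

APPROVED_AI_DOMAINS = {
    "approved-ai.example.edu",   # hypothetical tool with a signed FERPA agreement
}

KNOWN_AI_DOMAINS = {
    "approved-ai.example.edu",
    "consumer-llm.example.com",  # hypothetical consumer tool, no vendor agreement
    "free-tier-ai.example.net",
}

def flag_unsanctioned_ai_access(log_lines):
    """Yield (user, domain) pairs for AI tool access outside the approved list.

    Assumes each log line is 'timestamp,user,domain', a simplification of
    real proxy or DNS log records.
    """
    for line in log_lines:
        _, user, domain = line.strip().split(",")
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

sample_log = [
    "2025-01-15T09:12:00,advisor01,consumer-llm.example.com",
    "2025-01-15T09:14:00,faculty07,approved-ai.example.edu",
]

for user, domain in flag_unsanctioned_ai_access(sample_log):
    print(f"Unsanctioned AI tool access: {user} -> {domain}")
```

Even a check this crude turns the approved tools list from a policy document into something verifiable, which is the distinction the paragraph above draws.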
The consent pathway, asking students or parents to consent to specific AI tool use, is sometimes raised as an alternative, but it creates its own problems. Valid FERPA consent must be written, signed, and specific about which records will be disclosed, to whom, and for what purpose. Blanket consent obtained through general enrollment agreements is not considered valid under the Department of Education's guidance. For institutions serving large student populations, managing individualized consent for every AI tool an employee might use is not operationally practical. The better answer is institutional control: approve tools that meet FERPA standards, and monitor to ensure those are the tools being used.
Building an AI Governance Framework for FERPA
An effective AI governance framework for FERPA compliance in education requires four interconnected components: policy, procurement, training, and monitoring. Policy must explicitly address AI tools as a category — not just the generic 'technology and data use' language that most institutions already have. The policy should define what constitutes an education record, specify which categories of information may never be entered into external AI tools, and draw a clear line between approved platforms and consumer-grade or personal AI subscriptions.
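One way to see the difference between generic policy language and an enforceable control is to encode the policy's key rules as data that procurement and IT systems can check against. The sketch below is illustrative only: the tool names, field names, and categories are hypothetical assumptions, not a standard schema.

```python
# A sketch of an AI-use policy encoded as data rather than prose.
# Tool names, fields, and categories are hypothetical placeholders.

AI_TOOL_POLICY = {
    "approved_tools": {
        # tools with signed FERPA 'school official' agreements in place
        "InstitutionalWritingAssistant": {"dpa_signed": True, "trains_on_inputs": False},
    },
    "prohibited_data_categories": [
        # education-record data that may never leave approved systems
        "grades_and_transcripts",
        "disciplinary_records",
        "special_education_plans",
        "financial_aid_records",
    ],
    "personal_ai_subscriptions_allowed": False,
}

def is_tool_approved(tool_name: str) -> bool:
    """Approve only tools with a signed agreement that do not train on user inputs."""
    tool = AI_TOOL_POLICY["approved_tools"].get(tool_name)
    return bool(tool and tool["dpa_signed"] and not tool["trains_on_inputs"])

print(is_tool_approved("InstitutionalWritingAssistant"))  # True
print(is_tool_approved("RandomConsumerChatbot"))          # False
```

Expressing the rules this way also forces the policy to answer the questions FERPA actually turns on: is there an agreement, and can the vendor use the data for anything else.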
Procurement and legal review need to become standard gates before any AI tool is used in an institutional context. This means requiring vendors to sign FERPA-compliant data processing agreements and school official designations before deployment, reviewing terms of service for data training provisions, and evaluating data residency and retention policies. Institutions that have built this review into their IT procurement process for traditional software need to extend it explicitly to AI tools, including lightweight tools embedded in other platforms.
Training and monitoring complete the loop. Employees need to understand not just that FERPA exists, but specifically how AI tools create FERPA risk in their day-to-day work. Role-specific training for faculty, advisors, administrators, and IT staff is more effective than generic annual compliance modules. Monitoring provides the institutional assurance that policy and training are actually working. This means having technical visibility into which AI tools employees are accessing, how frequently, and in what functional contexts — without necessarily capturing the content of those interactions, which raises its own privacy concerns.
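One way to picture metadata-only monitoring is as an event schema that deliberately omits content fields. The sketch below, with illustrative field names and values, shows how unsanctioned usage could be aggregated by department without ever touching prompts or responses.

```python
# A sketch of metadata-only monitoring: each event records which tool was used,
# by which department, and whether the tool is approved, but never the prompt
# content itself. Field names and values are illustrative assumptions.

from collections import Counter
from dataclasses import dataclass

@dataclass
class AIAccessEvent:
    department: str     # e.g. "advising", "registrar"
    tool: str           # which AI tool was accessed
    approved: bool      # whether the tool is on the approved list
    # Deliberately no 'prompt' or 'response' field: content is never captured.

def unsanctioned_usage_by_department(events):
    """Count unapproved-tool events per department to surface training and policy gaps."""
    counts = Counter(e.department for e in events if not e.approved)
    return counts.most_common()

events = [
    AIAccessEvent("advising", "consumer-llm", approved=False),
    AIAccessEvent("advising", "consumer-llm", approved=False),
    AIAccessEvent("registrar", "institutional-tool", approved=True),
]

print(unsanctioned_usage_by_department(events))  # [('advising', 2)]
```

Excluding content fields is the design choice that keeps the monitoring itself from creating new FERPA exposure, a point the next section returns to.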
How Zelkir Helps Institutions Maintain Compliance
Zelkir is designed precisely for the monitoring challenge that FERPA compliance in the AI era creates. Deployed as a lightweight browser extension, Zelkir tracks which AI tools are being accessed by employees across the institution — categorizing usage by tool, by department, by frequency, and by the functional nature of the interaction. This gives compliance and IT teams the visibility they need to identify shadow AI usage, spot high-risk patterns, and enforce approved tools policies before a disclosure incident occurs.
Critically, Zelkir does not capture or store raw prompt content. This design choice matters for institutions operating under FERPA, because capturing the full content of employee AI interactions could itself create records that implicate student privacy or employee privacy rights. Zelkir's approach is to surface behavioral signals — what tools are being used, in what volume, in what contexts — without retaining the underlying data. Compliance teams get the governance visibility they need without creating secondary privacy risks.
For institutions conducting FERPA audits or responding to Department of Education inquiries, Zelkir's audit logs provide a defensible record of AI governance activity. Institutions can demonstrate that they have implemented technical controls to monitor for unsanctioned AI tool use, that they have identified and responded to policy deviations, and that their governance framework reflects the kind of active institutional oversight that FERPA demands. In an era where regulators are increasingly attentive to how educational institutions manage emerging technology risks, that documentation has real compliance value.
Conclusion: Governance Is the New Compliance Strategy
FERPA was written in 1974 to protect paper records in filing cabinets. Its core principle — that students have a right to control who sees their educational information — is more relevant than ever. But the mechanisms by which that information can be exposed have multiplied in ways the original statute could not have anticipated. Consumer AI tools are a new and particularly acute exposure point because they are powerful, accessible, and almost entirely invisible to institutional oversight under traditional IT governance models.
The institutions that will navigate this environment successfully are not the ones that ban AI outright — that approach is both impractical and counterproductive. They are the ones that build governance infrastructure capable of keeping pace with how AI tools are actually being used. That means clear policies, rigorous vendor agreements, role-specific training, and real-time visibility into employee AI behavior. These are not aspirational goals for a future compliance roadmap. They are table stakes for FERPA compliance today.
For compliance officers and CISOs in education, the right question to ask is not whether your institution has a policy about AI. It is whether you can actually verify that the policy is being followed. If the answer is no, that gap is your most significant FERPA risk — and closing it starts with visibility.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
