Why the SEC Is Focusing on AI Disclosure
Artificial intelligence has moved from a boardroom talking point to an operational reality at thousands of publicly traded companies. Employees are using AI tools to draft earnings summaries, analyze contracts, generate code, and process customer data — often without formal governance structures in place. The Securities and Exchange Commission has noticed, and its response is reshaping what public companies must disclose about their AI-related risks, dependencies, and practices.
The SEC's interest in AI is not incidental. Chair Gary Gensler has repeatedly stated that AI poses systemic risks to financial markets, and the agency has signaled it views misleading or incomplete AI disclosures as potential violations of existing securities law — not a future regulatory problem. For compliance officers and legal counsel at public companies, this creates an immediate obligation: understand what the SEC expects, audit what your organization is actually doing with AI, and close the gap between the two.
This post breaks down the current state of SEC AI disclosure requirements, explains how internal AI tool usage by employees creates hidden compliance exposure, and outlines a practical framework for getting ahead of enforcement before the next 10-K or proxy filing cycle.
What the SEC Currently Requires Companies to Disclose
The SEC has not yet finalized a standalone AI disclosure rule, but that does not mean companies are operating in a vacuum. Existing frameworks — particularly the 2018 interpretive guidance on cybersecurity disclosures and the 2023 cybersecurity rules — establish clear precedents for how the agency treats emerging technology risks. The core principle is straightforward: if AI use is material to your business operations, risk profile, or financial condition, it must be disclosed.
Under Item 1A of Form 10-K, companies must describe risk factors that make an investment in their securities speculative or risky. AI-related risks now fall squarely into this category. These include overreliance on AI systems that could produce inaccurate outputs, use of third-party AI tools that expose proprietary data, AI-generated content that influences investor-facing communications, and the potential for regulatory action related to AI practices. Companies that reference AI as a competitive advantage in their business section without disclosing the associated risks are particularly exposed.
The SEC's 2024 enforcement actions against two investment advisers, Delphia and Global Predictions, for making false claims about their use of AI are instructive. The agency characterized the conduct as "AI washing": making materially misleading statements about AI capabilities. The lesson for compliance teams is that the SEC evaluates AI disclosures not just for what companies say, but for whether those statements accurately reflect operational reality. Vague or aspirational language about AI, unsupported by actual governance, is a liability.
The Material Risk Standard and AI Usage
The concept of materiality is central to SEC disclosure obligations. A fact is material if there is a substantial likelihood that a reasonable investor would consider it important in making an investment decision. Applying this standard to AI is not always straightforward, and that ambiguity is where many compliance teams make mistakes.
Consider a financial services firm where analysts routinely use AI tools to generate research summaries that inform trading recommendations. If those AI tools produce errors — or if they expose client data through a third-party API — the downstream impact on the firm's revenue, reputation, and regulatory standing could be significant. Under the materiality standard, the firm's reliance on these tools, and the risks associated with them, likely warrants disclosure. Yet because the AI usage is decentralized and informal — employees adopting tools independently rather than through IT procurement — compliance officers may not even know it is happening.
This is the operational challenge behind the legal one. Materiality determinations require accurate information about what AI tools are being used, how they are being used, and what data they are processing. Companies that lack internal visibility into employee AI tool usage cannot make confident materiality assessments. This is not just a governance gap — it is a disclosure gap that creates direct SEC exposure.
How Internal AI Tool Usage Creates Disclosure Exposure
Most AI governance conversations focus on customer-facing AI products — the chatbots, recommendation engines, and automated decision systems that companies build and sell. But the SEC's disclosure concerns extend to internal AI tool usage by employees, and this is where many public companies are most unprepared.
When a finance team member uses an AI assistant to draft Management's Discussion and Analysis (MD&A) language, when a legal associate uses a generative AI tool to summarize regulatory filings, or when an investor relations team uses AI to prepare earnings call talking points, the outputs of those tools can directly influence investor-facing communications. If those tools hallucinate facts, introduce errors, or reflect training data biases, the consequences are not limited to operational inefficiency; they extend to potential securities fraud exposure if the outputs end up in disclosed documents.
There is also a data residency and confidentiality dimension. Many consumer-grade AI tools — tools that employees adopt without IT approval — transmit input data to third-party servers for processing. In a corporate context, those inputs may include material nonpublic information (MNPI), trade secrets, or client data. If that information is inadvertently shared with an AI provider's training pipeline, the company may face insider trading implications, breach of confidentiality obligations, or regulatory sanctions. None of these risks can be managed without knowing which AI tools employees are actually using and what types of information they are submitting to those tools.
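What that visibility can look like in practice is sketched below: a minimal classifier that tags outbound AI tool submissions with coarse data-sensitivity categories at the point of capture, while discarding the underlying text. The category names, regex patterns, and the classify_submission helper are illustrative assumptions, not a description of any particular product, and real patterns would need tuning with legal and compliance input.

```python
import re

# Hypothetical, deliberately coarse indicators for sensitive data categories.
CATEGORY_PATTERNS = {
    "client_identifier": re.compile(r"\b(account\s*#?\d{6,}|ssn|tax id)\b", re.IGNORECASE),
    "financial_figures": re.compile(r"\$\s?\d[\d,]*(\.\d+)?\s?(million|billion|mm|bn)?", re.IGNORECASE),
    "mnpi_keywords": re.compile(r"\b(unannounced|pre-release|confidential forecast|material nonpublic)\b", re.IGNORECASE),
}

def classify_submission(text: str) -> list[str]:
    """Return the sensitivity categories a submission appears to touch.

    Only category labels are returned; the raw text is never stored,
    which keeps the record useful for risk analysis without turning
    it into a prompt archive.
    """
    return [name for name, pattern in CATEGORY_PATTERNS.items() if pattern.search(text)]

# Example: record that an employee sent financial figures and MNPI-like language
# to a given AI tool -- not what they actually wrote.
record = {
    "tool": "example-ai-assistant.example.com",  # hypothetical tool domain
    "categories": classify_submission(
        "Summarize the confidential forecast: revenue up $40 million, unannounced."
    ),
}
print(record)
```

The design choice worth noting is that only category labels leave the classification step, which is what makes this kind of telemetry usable for governance without creating a second confidentiality problem.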
Building a Compliance Framework for SEC AI Disclosures
An effective compliance framework for SEC AI disclosures starts with visibility and ends with documentation. The intermediate steps — risk classification, policy enforcement, and cross-functional coordination — are where most organizations need to invest.
Start with an AI tool inventory. You cannot disclose what you do not know, and you cannot govern what you have not catalogued. This means going beyond IT-approved software to identify shadow AI usage: tools that employees are accessing through personal accounts or browser extensions, consumer AI products being used for work tasks, and AI features embedded in existing SaaS platforms that may not have been flagged during procurement. Browser-level monitoring tools that classify AI tool usage without capturing raw prompt content can surface this shadow usage while respecting employee privacy — giving compliance and security teams the data they need without creating surveillance concerns.
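As a rough illustration of how domain-level telemetry becomes an inventory, the sketch below counts AI tool visits per department from a metadata-only log. The CSV format, column names, and the domain-to-tool mapping are assumptions for the example; an actual deployment would source this mapping centrally and keep it under review.

```python
from collections import Counter
import csv

# Hypothetical mapping from domains observed in browser telemetry to AI tools.
KNOWN_AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT (consumer)",
    "gemini.google.com": "Gemini (consumer)",
    "claude.ai": "Claude (consumer)",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def build_inventory(log_path: str) -> Counter:
    """Count AI tool visits per (tool, department) from a domain-level log.

    Assumes a CSV with 'domain' and 'department' columns -- metadata only,
    no page content and no prompt text.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = KNOWN_AI_TOOL_DOMAINS.get(row["domain"].lower())
            if tool:
                usage[(tool, row["department"])] += 1
    return usage

if __name__ == "__main__":
    for (tool, department), visits in build_inventory("browser_domains.csv").most_common():
        print(f"{department:<12} {tool:<28} {visits} visits")
```

Even a report this simple answers the first question a materiality assessment needs answered: which teams are using which tools, and how often.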
Once you have an inventory, map it against your disclosure obligations. Identify which AI tools touch sensitive data categories, which are used in workflows that influence financial reporting or investor communications, and which present model reliability risks. Work with legal counsel to assess materiality for each category. Document your assessment methodology — the SEC has signaled that evidence of a thoughtful, structured process is itself a mitigating factor in enforcement scenarios. Finally, establish a cross-functional AI governance committee that includes IT, security, legal, finance, and compliance stakeholders who review the AI tool inventory on a quarterly basis and update disclosure language accordingly.
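One way to keep that mapping auditable is to record each tool's risk attributes in a structured form that the governance committee reviews each quarter. The schema and screening rules below are illustrative assumptions, not an SEC-prescribed format; the point is that the documented rationale travels with the assessment.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolAssessment:
    """One row of the AI tool inventory, carried through to materiality review."""
    tool: str
    data_categories: list[str]          # e.g. ["client_data", "mnpi"]
    investor_facing_workflows: bool     # feeds 10-K, MD&A, or earnings materials?
    third_party_processing: bool        # do inputs leave the company's environment?
    materiality_rationale: str = ""     # documented reasoning, reviewed by counsel
    flags: list[str] = field(default_factory=list)

def screen_for_disclosure_review(a: AIToolAssessment) -> AIToolAssessment:
    """Apply simple screening rules; counsel still makes the materiality call."""
    if a.investor_facing_workflows:
        a.flags.append("touches investor-facing communications")
    if "mnpi" in a.data_categories and a.third_party_processing:
        a.flags.append("MNPI processed by an external provider")
    return a

assessment = screen_for_disclosure_review(AIToolAssessment(
    tool="generic-ai-assistant",        # hypothetical inventory entry
    data_categories=["client_data", "mnpi"],
    investor_facing_workflows=True,
    third_party_processing=True,
    materiality_rationale="Used by IR to draft earnings call talking points.",
))
print(assessment.flags)
```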
What Enforcement Actions Signal About Future Expectations
The AI washing actions against Delphia and Global Predictions in 2024 established a clear precedent: the SEC will pursue companies that make materially misleading statements about AI under existing securities laws, without waiting for an AI-specific rule. The two firms settled for a combined $400,000 in penalties, modest by SEC standards but significant as a signal of enforcement intent. The agency explicitly stated that it will continue to police AI-related disclosures as part of its broader investor protection mandate.
Looking at the SEC's broader regulatory agenda, the pattern is consistent with how the agency handled cybersecurity disclosures over the past decade. The agency began with interpretive guidance, then issued comment letters to companies with inadequate cybersecurity disclosures, and ultimately formalized requirements through rulemaking. The 2023 cybersecurity disclosure rules — which require public companies to disclose material cybersecurity incidents within four business days and describe their cybersecurity risk management processes annually — are now the baseline expectation. AI disclosures appear to be following the same trajectory, and companies that wait for a formal AI rule to act are likely to find themselves behind the enforcement curve.
SEC comment letters are another leading indicator worth monitoring. The agency's Division of Corporation Finance has been issuing comments to companies that mention AI in their filings without providing adequate risk factor disclosure, specificity about AI's role in business operations, or discussion of governance controls. These comment letters are public and searchable — reviewing them gives compliance teams a real-time view of what the SEC's examiners are looking for.
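A lightweight way to monitor that activity is to query EDGAR full-text search for AI-related correspondence filings. The sketch below assumes the JSON endpoint behind the public full-text search interface (https://efts.sec.gov/LATEST/search-index) and its query parameters; verify those details against the SEC's current documentation before relying on them, and identify your organization in the User-Agent header as the SEC requests.

```python
import requests

# Assumed endpoint and parameters for EDGAR full-text search; confirm against
# the SEC's current documentation before depending on them.
EDGAR_FTS = "https://efts.sec.gov/LATEST/search-index"

def search_ai_comment_correspondence(start: str, end: str) -> list[dict]:
    """Search EDGAR correspondence filings mentioning 'artificial intelligence'."""
    resp = requests.get(
        EDGAR_FTS,
        params={
            "q": '"artificial intelligence"',
            "forms": "CORRESP,UPLOAD",   # company responses and SEC comment letters
            "startdt": start,
            "enddt": end,
        },
        # The SEC asks automated clients to identify themselves.
        headers={"User-Agent": "Example Compliance Team example@example.com"},
        timeout=30,
    )
    resp.raise_for_status()
    # Field names below ("_source", "display_names", "file_date") are assumptions
    # based on the search service's Elasticsearch-style responses.
    return resp.json().get("hits", {}).get("hits", [])

for hit in search_ai_comment_correspondence("2024-01-01", "2024-12-31")[:10]:
    src = hit.get("_source", {})
    print(src.get("display_names"), src.get("file_date"))
```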
Preparing Your Organization Before the Next Reporting Cycle
The practical window for action is tighter than most compliance teams realize. If your company files a 10-K in the first quarter, preparation of AI disclosure language should begin no later than the third quarter of the preceding year. That timeline means the AI tool inventory, materiality assessments, and governance controls all need to be complete and documented well before the drafting process begins.
Prioritize three immediate actions. First, commission a shadow AI audit — a structured effort to identify AI tools being used by employees outside of formal IT procurement channels. This is consistently the largest gap in enterprise AI governance and the one most likely to surface disclosure-relevant information. Second, review your existing risk factor language for AI-related gaps. If your company mentions AI as a growth driver anywhere in its filings, your risk factor section must address the operational, data, model reliability, and regulatory risks associated with that AI usage in commensurate detail. Third, establish a data handling policy specifically for AI tools that addresses which categories of information employees are permitted to submit to external AI systems, and implement technical controls — including tool-level access governance — to enforce that policy.
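To make the third action concrete, a data handling policy can be expressed as a machine-readable mapping from data categories to the tool tiers permitted to receive them, so the written policy and the technical control stay in sync. The tier labels, category names, and enforcement hook below are illustrative assumptions, offered as a sketch rather than a reference design.

```python
# Illustrative policy: which tool tiers may receive which data categories.
# "enterprise" = contractually governed deployment that does not train on inputs;
# "consumer"   = personal-account or unvetted tools. Both labels are assumptions.
AI_DATA_POLICY = {
    "public_marketing_copy": {"enterprise", "consumer"},
    "internal_operational":  {"enterprise"},
    "client_data":           set(),          # no external AI tools permitted
    "mnpi":                  set(),          # no external AI tools permitted
}

def is_submission_allowed(tool_tier: str, data_category: str) -> bool:
    """Return True only if the policy explicitly permits this combination.

    Unknown categories default to blocked, which keeps the control
    conservative when employees encounter data the policy has not classified.
    """
    return tool_tier in AI_DATA_POLICY.get(data_category, set())

assert is_submission_allowed("enterprise", "internal_operational")
assert not is_submission_allowed("consumer", "mnpi")
assert not is_submission_allowed("enterprise", "unclassified_category")
```

Defaulting unknown categories to blocked is the key design choice: it forces the policy document to keep pace with how employees actually use AI, rather than silently permitting whatever has not yet been classified.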
The SEC's AI disclosure expectations will only become more specific and more demanding as the agency's rulemaking agenda progresses. Companies that build governance infrastructure now — with real visibility into employee AI usage, documented risk assessments, and cross-functional oversight — will not only meet current disclosure obligations more confidently but will be structurally better positioned when formal AI rules arrive. Compliance is not the end goal; the end goal is accurate, defensible disclosure that reflects what your organization is actually doing with AI. Getting there requires knowing the answer to that question in the first place.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
