Why FINRA-Regulated Firms Face Unique AI Governance Pressure

Broker-dealers, registered investment advisers, and other financial firms operating under FINRA and SEC oversight are adopting generative AI tools at a pace that most compliance programs were never designed to handle. Analysts are using ChatGPT to summarize earnings calls. Advisors are drafting client communications with AI assistants. Back-office teams are automating compliance document reviews with tools their IT departments didn't approve and may not even know exist.

The problem is not that employees are using AI — in many cases, the productivity gains are real and significant. The problem is that most firms have no systematic visibility into which AI tools are being used, by whom, for what purpose, and whether that usage creates regulatory exposure under FINRA's existing rulebook. That combination of rapid adoption and near-zero visibility is precisely what regulators have flagged as a governance failure waiting to happen.

FINRA-regulated firms operate under a uniquely demanding compliance environment. Books-and-records obligations, suitability and best interest standards, supervision requirements, and data protection rules all create intersecting obligations that AI usage can inadvertently violate. Getting AI governance right is not a future priority — it is a present-tense regulatory necessity.

The Regulatory Landscape: What FINRA and the SEC Expect

FINRA has not yet issued a comprehensive AI-specific ruleset, but it has made clear through regulatory notices, examination priorities, and guidance letters that existing rules apply fully to AI-assisted activities. FINRA Rule 3110 requires firms to establish and maintain a supervisory system — including written supervisory procedures — for all business activities. If employees are using AI tools to draft communications, generate research, or assist in client-facing decisions, those activities fall squarely within the supervisory umbrella.

The SEC has been more explicit. Its 2023 proposed rules on predictive data analytics, along with related staff guidance on standards of conduct, signaled that the agency views AI as a material factor in how firms manage conflicts of interest and fulfill fiduciary obligations. For dual registrants subject to both FINRA and SEC oversight, the governance burden compounds quickly. Any AI tool that influences a recommendation, even indirectly, by summarizing research a rep then acts on, may implicate Regulation Best Interest.

Books-and-records rules under SEA Rule 17a-4 and FINRA Rule 4511 add another dimension. If an employee uses an AI tool to draft or substantively edit a client communication, regulators may expect that interaction to be captured and retrievable. Most third-party AI tools do not natively integrate with a firm's archiving infrastructure, creating a records gap that examination teams are increasingly looking for. Firms that cannot produce a coherent account of how AI tools were used during a given period will face credibility problems in regulatory examinations.

The Hidden Risk: Shadow AI in Broker-Dealers and RIAs

Shadow AI — the use of AI tools outside of formally approved channels — is the most underappreciated compliance risk in financial services today. Unlike shadow IT of the past, which typically involved file-sharing apps or personal email, shadow AI tools can directly touch the substance of regulated activities. An advisor who pastes client portfolio details into a free-tier AI chatbot to get allocation ideas has potentially transmitted customer data to an unvetted third party, generated advice-adjacent content outside any supervisory framework, and created no retrievable record of the interaction.

The scale of this problem at most firms is larger than compliance officers suspect. Research from multiple cybersecurity firms consistently shows that employees underreport AI tool usage, particularly when they believe it may be prohibited. In financial services, where the stakes of disclosure are higher, employees have strong incentives to keep their AI workflows private. This creates an environment where the compliance team's mental model of AI usage at the firm is systematically incomplete.

The risk is not theoretical. FINRA's 2024 Annual Regulatory Oversight Report specifically called out technology governance as an area of focus, noting that firms must ensure their supervisory systems account for new and emerging technologies. Examiners who arrive at a broker-dealer and ask to see the firm's AI usage policy, along with evidence that it is being enforced, are increasingly walking away with findings when firms cannot produce meaningful documentation.

Four Core Governance Controls Every Firm Should Implement

Effective AI governance for FINRA-regulated firms does not require blocking all AI tool usage. It requires building the visibility and control infrastructure necessary to supervise that usage. There are four controls that should anchor any firm's AI governance program.

First, maintain a real-time inventory of AI tools in active use across the organization. This goes beyond the approved software list. It means continuously monitoring which AI applications employees are actually accessing, including browser-based tools, API-connected services, and AI features embedded in productivity suites like Microsoft 365 Copilot or Salesforce Einstein. Without this inventory, you cannot supervise what you cannot see.

Second, classify AI usage by functional category, distinguishing among, for example, document drafting, client communication assistance, research summarization, and trading-adjacent analysis. This classification layer is what allows compliance teams to apply proportionate oversight: client communication tools warrant heavier supervision than internal productivity tools.
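
To make these first two controls concrete, here is a minimal Python sketch of what a live inventory with functional classification might look like. The class names, category labels, and the idea of a discovery feed calling record_observation are assumptions made for the example, not any particular product's schema.

```python
# Minimal sketch: a live inventory of observed AI tools, classified by
# function. Names, categories, and the discovery feed are illustrative
# assumptions, not any vendor's actual schema.
from dataclasses import dataclass, field
from enum import Enum


class UsageCategory(Enum):
    DOCUMENT_DRAFTING = "document_drafting"
    CLIENT_COMMUNICATION = "client_communication"
    RESEARCH_SUMMARIZATION = "research_summarization"
    TRADING_ADJACENT = "trading_adjacent"
    INTERNAL_PRODUCTIVITY = "internal_productivity"


@dataclass
class AIToolRecord:
    name: str                # e.g., "ChatGPT", "M365 Copilot"
    category: UsageCategory  # drives the level of oversight applied
    approved: bool           # on the firm's vetted list?
    observed_roles: set[str] = field(default_factory=set)


def record_observation(inventory: dict[str, AIToolRecord],
                       tool: str,
                       category: UsageCategory,
                       role: str,
                       approved_list: set[str]) -> None:
    """Fold one observed usage event into the live inventory."""
    entry = inventory.get(tool)
    if entry is None:
        entry = AIToolRecord(name=tool,
                             category=category,
                             approved=tool in approved_list)
        inventory[tool] = entry
    entry.observed_roles.add(role)
```

In practice, a discovery feed from network logs or browser telemetry would call record_observation for each event, and a weekly report could then surface every unapproved tool and the roles using it.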

Third, establish a formal AI tool approval workflow with documented risk assessments for each tool. This should include vendor due diligence focused on data handling practices, a privacy and security review, and a determination of whether the tool's outputs could constitute regulated activity.

Fourth, build an audit trail that captures the fact of AI tool usage (which tools were used, by which roles, at what frequency, and in what functional context) without necessarily capturing the raw content of every interaction. This last point is important: a governance program that requires logging every prompt will face strong pushback and may raise privacy concerns of its own. The goal is metadata-level visibility that supports supervision without creating a surveillance apparatus.
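
For the fourth control, a hedged sketch of what a metadata-only audit event might look like follows. The field names and the append-only JSONL sink are assumptions made for the example.

```python
# Hedged sketch of a metadata-only audit event: the fact of usage, not its
# content. Field names and the JSONL sink are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass(frozen=True)
class AIUsageEvent:
    timestamp: str   # ISO 8601, UTC
    tool: str        # e.g., "ChatGPT"
    user_role: str   # role-level attribution, not prompt content
    category: str    # functional context, e.g., "client_communication"
    approved: bool   # was the tool approved at the time of use?


def log_usage(sink, tool: str, user_role: str, category: str,
              approved: bool) -> None:
    """Append one metadata-level record to a file-like audit sink."""
    event = AIUsageEvent(
        timestamp=datetime.now(timezone.utc).isoformat(),
        tool=tool,
        user_role=user_role,
        category=category,
        approved=approved,
    )
    sink.write(json.dumps(asdict(event)) + "\n")  # no prompts, no responses
```

An append-only JSONL file, or an equivalent store, gives examiners a dated account of usage without turning the governance program into prompt surveillance.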

Balancing Productivity and Compliance in AI Tool Adoption

One of the most common mistakes compliance teams make is treating AI governance as purely a restriction function. When the compliance response to AI adoption is a blanket prohibition or an approval process so slow that it becomes a de facto ban, employees don't stop using AI — they become more careful about hiding it. The firm ends up with the worst of both worlds: no productivity benefit from AI adoption and no compliance visibility into the shadow usage that continues anyway.

A more effective approach is to create a tiered approval framework that makes it easier for employees to use lower-risk AI tools quickly while applying more rigorous review to tools that touch regulated activities. Internal productivity tools — AI writing assistants used for internal memos, code generation for non-trading applications, meeting summarization tools — can often be approved through a lightweight process. Tools that touch client data, investment analysis, or outbound communications require a more structured review, but that review should have a defined timeline and clear criteria so that business lines know what to expect.
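
One way to make such a tiered framework concrete is to encode it as reviewable configuration. The tier names, review steps, and target timelines below are illustrative assumptions, not prescribed values; a real framework would mirror the firm's own risk taxonomy.

```python
# Illustrative tier definitions for a tiered AI tool approval framework.
# Tier names, review steps, and target timelines are example assumptions.
APPROVAL_TIERS = {
    "tier_1_low_risk": {
        "examples": ["internal memo drafting", "meeting summarization",
                     "code generation for non-trading applications"],
        "review_steps": ["security questionnaire", "manager sign-off"],
        "target_days": 5,
    },
    "tier_2_elevated": {
        "examples": ["research summarization", "document review automation"],
        "review_steps": ["vendor due diligence", "privacy review",
                         "compliance sign-off"],
        "target_days": 15,
    },
    "tier_3_regulated": {
        "examples": ["client communications", "investment analysis",
                     "any tool touching client data"],
        "review_steps": ["full vendor due diligence",
                         "privacy and security review",
                         "regulated-activity determination",
                         "WSP update", "compliance and legal sign-off"],
        "target_days": 30,
    },
}


def review_steps(tier: str) -> list[str]:
    """Return the review steps a tool in the given tier must complete."""
    return APPROVAL_TIERS[tier]["review_steps"]
```

Encoding the tiers as data rather than prose makes the defined timelines and criteria auditable and easy to publish to business lines.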

Firms that have successfully navigated this balance typically combine a formal governance structure with active education. When advisors and analysts understand which tools are approved and why, and when they have a clear path to get new tools reviewed, they are far more likely to work within the governance framework rather than around it. Compliance becomes a partner in AI adoption rather than an obstacle to it.

Building an Audit-Ready AI Usage Program

When FINRA examiners ask about your firm's AI governance program, the response needs to be more than a policy document. Examiners are increasingly sophisticated about technology and will probe whether written policies reflect actual practice. An audit-ready program has three essential characteristics: documented policies that are current and specific, evidence of ongoing monitoring, and records that demonstrate the program is functioning as designed.

On the policy side, written supervisory procedures should explicitly address AI tool usage. This means naming the categories of AI tools that are permitted, restricted, or prohibited; describing the approval process for new tools; and specifying the supervisory procedures that apply to AI-assisted client communications and advice. Generic technology use policies that predate the generative AI era are not sufficient. Policies should be reviewed and updated at least annually, with a formal sign-off process that creates a dated record.
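
For illustration, a WSP appendix might encode the permitted, restricted, and prohibited categories in a machine-readable form that monitoring tools can consume. The categories and statuses below are example assumptions, not a recommended taxonomy.

```python
# Example encoding of the tool categories a WSP might name, in a form that
# monitoring tools can consume. Categories and statuses are illustrative.
AI_TOOL_POLICY = {
    "internal_productivity": "permitted",      # lightweight approval path
    "research_summarization": "restricted",    # approved tools only
    "client_communication": "restricted",      # approved tools + supervision
    "trading_adjacent": "restricted",          # case-by-case review
    "unvetted_public_chatbots": "prohibited",  # no vetted data handling
}
```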

On the monitoring side, firms need technical infrastructure that produces evidence of oversight, not merely the latent capability to monitor. Periodic usage reports, anomaly alerts when employees access unapproved AI tools, and records of how usage patterns have changed over time all contribute to an examination-ready posture. Tools like Zelkir, which give compliance teams continuous visibility into AI tool usage across the organization, categorized by function and mapped to user roles, are increasingly becoming part of the compliance technology stack at forward-looking firms. The goal is to be able to walk an examiner through exactly how AI is being used at your firm and demonstrate that your supervisory system accounts for it.
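
As a simple example of monitoring that produces evidence, here is a generic sketch of an alert check that flags usage of unapproved tools from metadata-level events. The event shape follows the audit-trail sketch above and is an assumption, not any vendor's export format.

```python
# Simple sketch of an anomaly check: flag metadata-level usage events whose
# tool is not on the approved list. The event shape is an assumption.
from collections.abc import Iterable


def unapproved_usage_alerts(events: Iterable[dict],
                            approved_tools: set[str]) -> list[dict]:
    """Return one alert per usage event involving an unapproved tool."""
    alerts = []
    for event in events:
        if event["tool"] not in approved_tools:
            alerts.append({
                "type": "unapproved_ai_tool",
                "tool": event["tool"],
                "user_role": event["user_role"],
                "timestamp": event["timestamp"],
            })
    return alerts
```

Running a check like this on a schedule, and retaining its output, is what turns monitoring capability into monitoring evidence.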

Making AI Governance a Competitive Advantage

Firms that build robust AI governance programs early will gain advantages that extend beyond regulatory compliance. Institutional clients, particularly large asset owners and pension funds, are increasingly conducting their own due diligence on the technology practices of their broker-dealers and managers. A firm that can demonstrate a mature, documented AI governance framework — one that shows both productivity-enabling AI adoption and rigorous oversight — will stand out favorably in competitive pitches and RFP responses.

There is also a talent dimension. Compliance professionals and technology risk specialists are increasingly evaluating potential employers on the maturity of their technology governance programs. A firm with a coherent AI governance strategy signals operational sophistication and reduces the professional risk that compliance staff face when oversight gaps lead to regulatory findings.

Perhaps most importantly, firms that establish governance infrastructure now will be far better positioned to adopt more powerful AI tools as they emerge. The firms that will struggle most with future AI regulations are those that have no visibility into current AI usage and no muscle memory for integrating governance into technology adoption. Building that foundation today — with clear policies, real-time monitoring, and audit-ready documentation — is not just a compliance exercise. It is strategic preparation for an industry that will be increasingly defined by how well firms manage the intersection of artificial intelligence and regulatory obligation.

Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.