The Compliance Stakes of AI in Financial Services
Artificial intelligence has moved from experimental pilot to operational reality across the financial services industry. Loan underwriting, fraud detection, customer service chatbots, document summarization, and investment research — AI tools are now embedded in workflows that directly touch regulatory obligations. And unlike other technology shifts, AI adoption is happening at the employee level, often faster than IT and compliance teams can track it. That velocity creates a category of risk that is new, difficult to quantify, and increasingly on the radar of regulators worldwide.
For financial institutions, the compliance stakes are unusually high. Banks operate under overlapping frameworks (federal prudential supervision, consumer protection law, securities regulation, data privacy regimes, and their international equivalents), and virtually all of them have implications for how AI is used, audited, and governed. A loan officer pasting customer financial data into a public large language model is not just a security incident. Depending on the context, it may constitute a violation of the Gramm-Leach-Bliley Act (GLBA), trigger a model risk management concern under SR 11-7, or create liability under the Fair Credit Reporting Act.
The good news is that financial institutions have deep institutional muscle for compliance. The challenge is extending that muscle to a technology category that was not contemplated when most of these frameworks were written. This post outlines what compliance officers, CISOs, and legal counsel at banks and credit unions need to know — and do — right now.
Regulatory Frameworks Governing AI in Banking
No single federal statute governs AI in banking comprehensively, but a dense web of existing regulations applies directly to AI-enabled processes. The supervisory guidance on model risk management issued by the Federal Reserve and the Office of the Comptroller of the Currency in 2011 (SR 11-7 / OCC Bulletin 2011-12), and adopted by the FDIC in 2017, defines "model" broadly enough that regulators have applied it to machine learning systems used in credit decisioning, stress testing, and risk measurement. An AI system whose outputs inform a business decision is likely to qualify as a model under this guidance, which means it requires validation, documentation, and ongoing performance monitoring.
The Consumer Financial Protection Bureau has signaled aggressive scrutiny of algorithmic underwriting and pricing. In circulars issued in 2022 and 2023, the CFPB made clear that creditors cannot cite the complexity of an AI model as a defense for failing to provide specific reasons in adverse action notices. If your institution uses AI to support any credit decision, Regulation B and the Fair Credit Reporting Act require specific, accurate explanations of adverse action, a standard that many tools built on large language models struggle to meet by design.
Beyond model risk and consumer protection, the European Union's AI Act, which entered into force in 2024 with obligations phasing in over the following years, designates credit scoring and biometric identification as high-risk AI applications subject to mandatory conformity assessments, transparency obligations, and human oversight requirements. For any institution with EU operations or customers, this framework is already operational compliance territory, not horizon scanning. Add the patchwork of state-level AI and biometric privacy laws in Illinois, Texas, and California, and the regulatory surface area for financial AI compliance becomes substantial.
The Shadow AI Problem in Financial Institutions
The most immediate and underappreciated compliance risk in banking is not the AI systems that IT deployed and the model risk team validated. It is the AI tools that employees adopted on their own — often through a browser extension, a free-tier SaaS subscription, or a personal device — without IT knowledge or approval. This is commonly called shadow AI, and in financial services it is widespread.
According to usage data collected across enterprise environments, the majority of AI tool interactions in knowledge-worker organizations involve tools that were never formally procured, vetted, or approved by security teams. In a bank, this means analysts using ChatGPT to summarize credit memos, advisors using AI writing tools to draft client communications, and operations staff using AI document parsers to process loan applications. Each of these interactions carries data handling, confidentiality, and model governance implications that compliance teams have no visibility into unless they have tooling specifically designed to surface it.
The regulatory exposure from shadow AI is not theoretical. When an examiner asks a bank to document how customer data was handled in a lending process, and an employee was using an unauthorized AI tool to assist that process, the institution faces a documentation gap that cannot be retroactively filled. The solution is not to ban AI — that approach has consistently failed and simply drives usage further underground. The solution is to gain visibility into what tools are being used, classify the nature of that usage, and build governance processes that can respond to what the data shows.
Data Residency, Confidentiality, and Model Risk
Financial institutions handle three categories of sensitive information that require special treatment when AI is involved: personally identifiable information covered by privacy law, material non-public information subject to securities regulation, and confidential supervisory information protected under banking law. Any AI governance program must account for all three, because the failure modes are different and the regulatory consequences vary significantly.
For consumer data, the core question is whether input to an AI tool constitutes a disclosure to a nonaffiliated third party under the GLBA. Sharing data with a commercial AI vendor can fall within the GLBA's service provider exception, but only if an appropriate agreement restricting the vendor's use of the data is in place, and supervisory expectations also call for due diligence on the vendor's data handling practices, model training data policies, and subprocessor relationships. Many consumer-facing AI tools that employees adopt independently have neither, and their terms of service may permit use of input data for model training, which creates serious confidentiality exposure.
Model risk is a separate but related concern. SR 11-7 requires that models used in consequential processes be inventoried, validated, and monitored. A generative AI tool that a credit analyst uses to summarize loan documents may not look like a model in the traditional sense, but if its output influences a lending decision, it arguably falls within model risk scope. Compliance teams should work with their model risk management function to establish a classification framework that accounts for generative AI use cases, including those that operate as assistive tools rather than direct decision engines.
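As an illustration of what such a classification framework might look like in practice, the sketch below encodes a hypothetical three-tier rule in Python. The tier names and criteria are invented for this post; an actual framework would be defined by the institution's model risk management function and approved through its governance process.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tiers; real tiers come from the institution's MRM policy.
    MODEL_SCOPE = "inventory plus SR 11-7-style validation and monitoring"
    CONTROLLED_ASSISTIVE = "inventory plus mandatory human review of outputs"
    LOW_RISK = "logged, with periodic sampling"


@dataclass
class AIUseCase:
    name: str
    influences_credit_decision: bool  # output feeds a lending outcome
    handles_customer_data: bool       # touches GLBA-covered information
    human_reviews_every_output: bool  # a person checks each output before use


def classify(uc: AIUseCase) -> RiskTier:
    """Assign a risk tier. The key principle from the text: if output can
    influence a credit decision, the 'assistive' label does not matter."""
    if uc.influences_credit_decision:
        return RiskTier.MODEL_SCOPE
    if uc.handles_customer_data and not uc.human_reviews_every_output:
        return RiskTier.CONTROLLED_ASSISTIVE
    return RiskTier.LOW_RISK


print(classify(AIUseCase("loan memo summarizer", True, True, True)))
# RiskTier.MODEL_SCOPE -- summarization that feeds a lending decision
```

Note that in this example the summarizer lands in full model scope even though a human reviews every output, because its output still influences a credit decision.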
Building an AI Governance Program That Actually Works
An effective AI governance program for a financial institution requires four foundational capabilities: discovery, classification, policy enforcement, and audit trail. Discovery means knowing what AI tools are in use across the organization — not just what was formally procured, but what employees are actually using. Classification means understanding the nature of that usage: is a tool being used for internal drafting, customer-facing communication, data analysis, or something else? Policy enforcement means having the ability to act on that knowledge — restricting certain tools, requiring approval workflows, or flagging specific usage patterns for review. Audit trail means retaining records that support regulatory examination and internal investigation.
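To make the four capabilities concrete, here is a minimal sketch of the kind of per-interaction record a governance platform might retain. The schema is hypothetical, invented for this post rather than drawn from any particular product, and it deliberately captures metadata about the interaction rather than the content of prompts or outputs.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIUsageEvent:
    # Discovery: which tool was observed, and where
    tool_domain: str     # e.g. a destination seen in browser or network telemetry
    user_id: str         # pseudonymous employee identifier
    department: str
    # Classification: the nature of the usage, assigned by policy rules
    usage_category: str  # e.g. "internal_drafting", "customer_facing", "data_analysis"
    sanctioned: bool     # was the tool formally approved?
    # Policy enforcement: what the control actually did
    action_taken: str    # e.g. "allowed", "blocked", "flagged_for_review"
    # Audit trail: an immutable timestamp supporting later examination
    observed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Append-only JSON lines make a simple, examiner-reviewable audit trail.
event = AIUsageEvent("chat.example.com", "u-4821", "credit_ops",
                     "internal_drafting", sanctioned=False,
                     action_taken="flagged_for_review")
print(json.dumps(asdict(event)))
```

One record like this, retained per interaction, simultaneously serves discovery (what tool), classification (what kind of use), enforcement (what was done about it), and audit (when, provably).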
Most financial institutions have made progress on formal AI procurement governance — adding AI to their vendor management processes, updating third-party risk questionnaires, and engaging legal counsel on contract terms. Fewer have addressed the discovery and classification problem for employee-initiated AI usage. This is the gap that creates the most immediate regulatory exposure, because it is the gap that examiners are most likely to probe when they ask about AI governance maturity.
Practically, an AI governance program should begin with a usage audit. Before writing policies or deploying controls, compliance and IT teams need to understand the actual landscape of AI tool adoption in their organization. This means deploying tooling that can surface AI tool usage across the enterprise without capturing the content of employee interactions — an important distinction for institutions with employee privacy obligations and attorney-client privilege concerns. Once the usage landscape is understood, the institution can make informed decisions about which tools to formally adopt, which to restrict, and which require additional controls.
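A minimal sketch of what that first audit step could look like, assuming the institution can export network or proxy telemetry as a CSV containing destination domains but no request content. The column names and the domain-to-tool mapping below are illustrative assumptions; any real mapping would be far larger and continuously maintained.

```python
from collections import Counter
import csv

# Illustrative, deliberately incomplete mapping of domains to AI tools.
AI_TOOL_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}


def audit_ai_usage(proxy_export_path: str) -> Counter:
    """Count AI-tool visits per (tool, department) from a metadata-only
    proxy export with assumed columns: timestamp, user_id, department,
    domain. Request bodies are never read; only destinations are counted."""
    usage: Counter = Counter()
    with open(proxy_export_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["domain"])
            if tool:
                usage[(tool, row["department"])] += 1
    return usage


# e.g. audit_ai_usage("proxy_export.csv").most_common(10)
```

Even a rough count like this is usually enough to show compliance leadership where unsanctioned usage is concentrated before a single policy is written.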
Audit Readiness: What Examiners Will Ask About AI
Regulatory examiners at the OCC, Federal Reserve, and FDIC are increasingly incorporating AI-specific questions into their examination procedures. Based on published guidance and examination findings from 2023 and 2024, financial institutions should expect inquiries in several areas: AI inventory and documentation, model validation for AI-enabled processes, third-party risk management for AI vendors, consumer protection compliance in AI-assisted decisioning, and operational resilience for AI-dependent workflows.
On inventory and documentation, examiners are likely to ask whether the institution has a complete inventory of AI tools in use, including employee-adopted tools, and whether there is documentation of the risk assessment conducted before those tools were deployed in business processes. Institutions that cannot produce a defensible inventory — one that accounts for usage patterns and not just formal procurement — are in a weak position. An AI governance platform that continuously monitors and logs tool usage provides the evidentiary foundation that a static spreadsheet cannot.
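To illustrate why a continuously maintained log outperforms a static spreadsheet, the sketch below regenerates an inventory view on demand from retained usage events, reusing the hypothetical JSON-lines record format from the earlier sketch. Again, the field names are assumptions made for this post, not a standard.

```python
import json
from collections import defaultdict


def build_inventory(events_path: str) -> list[dict]:
    """Roll up append-only usage events (one JSON object per line) into a
    per-tool inventory an examiner can review: first seen, last seen,
    departments involved, and whether the tool was formally sanctioned."""
    tools: dict = defaultdict(lambda: {
        "first_seen": None, "last_seen": None,
        "departments": set(), "sanctioned": None, "event_count": 0,
    })
    with open(events_path) as f:
        for line in f:
            e = json.loads(line)
            t = tools[e["tool_domain"]]
            ts = e["observed_at"]  # ISO-8601 UTC strings sort correctly
            t["first_seen"] = ts if t["first_seen"] is None else min(t["first_seen"], ts)
            t["last_seen"] = ts if t["last_seen"] is None else max(t["last_seen"], ts)
            t["departments"].add(e["department"])
            t["sanctioned"] = e["sanctioned"]
            t["event_count"] += 1
    return [{"tool": domain, **info, "departments": sorted(info["departments"])}
            for domain, info in tools.items()]
```

Because the inventory is derived from the underlying events rather than maintained by hand, it cannot silently drift out of date the way a spreadsheet does.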
For third-party risk, examiners will focus on whether AI vendors have been subjected to the same due diligence as other third parties, including contractual protections, data handling reviews, and business continuity assessment. The 2023 interagency guidance on third-party risk management applies directly to AI vendors. Institutions should also be prepared to demonstrate that they have assessed the fair lending implications of any AI tool used in a consumer credit context, even if that tool is classified as an assistive technology rather than a decisioning model. Regulators have made clear that the label matters less than the function.
Conclusion: Governance Is the Foundation of Responsible AI Adoption
AI presents genuine opportunity for financial institutions — in efficiency, accuracy, customer experience, and risk management. Realizing that opportunity without creating regulatory exposure requires governance infrastructure that matches the pace of adoption. For most institutions, the most urgent priority is not writing a comprehensive AI policy or standing up a formal AI ethics committee. It is gaining visibility into what is already happening and building the documentation capabilities that support both internal accountability and external examination.
The compliance frameworks that govern banking were built for a world of deterministic systems and human decision-makers. Adapting those frameworks to AI requires both regulatory interpretation and operational investment. Institutions that treat AI governance as a compliance checkbox — issuing a policy, running a one-time training, and moving on — will find themselves exposed when the next examination cycle arrives or when an AI-related incident requires them to reconstruct a paper trail that does not exist.
The institutions that will navigate this environment successfully are those that approach AI governance the same way they approach any other material operational risk: with continuous monitoring, documented controls, clear accountability, and the ability to demonstrate all of the above to an examiner on short notice. That posture requires tooling, not just policy. And it starts with knowing what your employees are actually doing with AI today.
Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
