Why Insurance Firms Are Racing to Adopt AI Tools

Insurance is one of the most data-intensive industries on earth, and AI tools have arrived at exactly the right moment. Underwriters are using large language models to summarize complex policy documents and analyze risk submissions. Claims adjusters are feeding incident reports into AI assistants to draft coverage determinations faster. Actuarial teams are experimenting with AI-assisted modeling. Across the enterprise, productivity gains are real, measurable, and difficult for leadership to ignore.

The pace of adoption, however, has outrun governance in most organizations. According to a 2024 Deloitte survey, over 70 percent of financial services employees reported using at least one AI tool in their daily workflow — yet fewer than 40 percent of their employers had a formal AI usage policy in place. In insurance, where every data point touching a policyholder carries regulatory weight, that gap is not just a compliance concern. It is an active liability.

The core tension is this: the same characteristics that make AI tools powerful — their ability to process, synthesize, and generate from large volumes of information — make them dangerous in the absence of governance controls. When an underwriter pastes a client's loss history into a public AI chatbot, the data doesn't stay in the room. Understanding where that risk lives, and building systems to manage it, is now a core responsibility for insurance CISOs and compliance officers.

The Compliance Landscape: Regulations You Can't Ignore

Insurance companies operating in the United States face a fragmented but increasingly assertive regulatory environment around AI. The NAIC's Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted by a growing number of states, requires insurers to maintain governance frameworks that ensure AI tools produce fair, explainable, and non-discriminatory outcomes. States including Colorado, California, and New York have layered additional requirements on top of that baseline, particularly around bias audits and consumer notification.

On the data privacy side, state-level laws such as the California Consumer Privacy Act, as amended and expanded by the CPRA, impose strict requirements on how personal information — including the health and financial data common in insurance workflows — is collected, processed, and shared. When employees use third-party AI tools that process policyholder data, those tools may constitute data processors under applicable law, triggering vendor assessment, contractual, and data residency obligations that most AI SaaS products are not designed to satisfy out of the box.

For firms with international operations or reinsurance relationships, the EU AI Act introduces another layer. The Act classifies certain insurance-related AI uses as high-risk, notably risk assessment and pricing in life and health insurance. High-risk classification brings conformity assessment, technical documentation, and human oversight obligations. Compliance officers who treat AI governance as a domestic issue are already behind.

Top Risk Vectors When Employees Use AI Without Governance

The most significant risk in most insurance organizations today is not a rogue deployment of an internal AI system. It is the unmanaged proliferation of employee-facing AI tools — ChatGPT, Claude, Gemini, Copilot, and dozens of specialized vertical tools — used without IT awareness, security review, or policy guardrails. These tools are frictionless by design. An employee doesn't need procurement approval to open a browser tab and start working with sensitive data.

Shadow AI usage creates several distinct risk vectors. First, data exfiltration: employees routinely paste policy details, claimant information, medical records summaries, and proprietary actuarial assumptions into AI interfaces without understanding that this data may be used to train models or retained by the provider. Second, regulatory exposure: if AI-generated outputs influence an underwriting or claims decision and no audit trail exists, the firm cannot demonstrate compliance with explainability and fairness requirements. Third, vendor risk: third-party AI tools rarely complete insurance-grade due diligence processes, meaning the firm may be in violation of its own third-party risk management policies without knowing it.
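To make the exfiltration vector concrete, here is a minimal sketch of the kind of pattern check a DLP rule or browser guardrail might run before text leaves the organization. The patterns and identifier formats are illustrative assumptions, not a production ruleset.

```python
import re

# Illustrative patterns only. A real ruleset would be far broader and tuned to
# the firm's own identifier formats; the POL-/CLM- schemes here are hypothetical.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "policy_number": re.compile(r"\bPOL-\d{8}\b"),
    "claim_number": re.compile(r"\bCLM-\d{8}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data detected in a block of text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

print(flag_sensitive("Loss history for claimant SSN 123-45-6789, policy POL-20240117."))
# -> ['ssn', 'policy_number']
```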

There is also a subtler risk around professional reliance. When claims professionals or underwriters begin systematically deferring to AI-generated analysis without critical review, the firm's decision quality becomes opaque and difficult to defend in litigation or regulatory examination. Governance frameworks must address not just what tools are used, but how and when human judgment is expected to override AI output.

Building an AI Acceptable Use Policy for Insurance Teams

An AI acceptable use policy for an insurance firm needs to be more specific than a generic technology policy. It should distinguish between approved AI tools — those that have passed security review, have a signed data processing agreement, and are deployed on enterprise infrastructure — and unapproved tools that employees may access independently. The policy should explicitly address data classification: which categories of information may never be entered into an AI tool, including non-public personal information, protected health information, and material non-public actuarial data.
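One way to make those distinctions operational is to express the tool registry and data classification as structured data that technical controls can read. The sketch below is a simplified illustration: the tool names, categories, and clearance levels are hypothetical, and a real registry would live in a GRC system rather than in code.

```python
from dataclasses import dataclass
from enum import Enum

class ApprovalStatus(Enum):
    APPROVED = "approved"        # security review passed, DPA signed, enterprise deployment
    CONDITIONAL = "conditional"  # approved for limited use cases, review ongoing
    BLOCKED = "blocked"          # unacceptable risk

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    NPI = 3        # non-public personal information
    PHI = 4        # protected health information
    ACTUARIAL = 5  # material non-public actuarial data

@dataclass
class AITool:
    name: str
    status: ApprovalStatus
    max_data_class: DataClass = DataClass.PUBLIC  # highest class the tool is cleared for

# Hypothetical registry entries for illustration.
REGISTRY = {
    "enterprise-copilot": AITool("enterprise-copilot", ApprovalStatus.APPROVED, DataClass.INTERNAL),
    "public-chatbot": AITool("public-chatbot", ApprovalStatus.BLOCKED),
}

def may_enter(tool_name: str, data: DataClass) -> bool:
    """Unknown and blocked tools refuse everything; otherwise compare clearance levels."""
    tool = REGISTRY.get(tool_name)
    if tool is None or tool.status is ApprovalStatus.BLOCKED:
        return False
    return data.value <= tool.max_data_class.value

print(may_enter("enterprise-copilot", DataClass.PHI))  # False: not cleared for PHI
print(may_enter("public-chatbot", DataClass.PUBLIC))   # False: blocked outright
```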

The policy should also define use case boundaries. There is a meaningful difference between using an AI writing assistant to improve internal communications and using one to draft coverage denial letters that will reach policyholders. The latter has regulatory, legal, and customer experience implications that require specific controls. Teams should be given clear guidance on which use cases are permitted without additional review, which require manager or legal sign-off, and which are prohibited outright.
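The same tiering can be captured in a simple lookup, as in the sketch below. The use case names and their assignments are examples only; the actual tiers belong to legal and compliance, and defaulting unknown use cases to sign-off is one conservative design choice.

```python
from enum import Enum

class Review(Enum):
    PERMITTED = "permitted without additional review"
    SIGN_OFF = "requires manager or legal sign-off"
    PROHIBITED = "prohibited"

# Example tier assignments; the real list belongs to legal and compliance.
USE_CASE_TIERS = {
    "internal_communications": Review.PERMITTED,
    "policy_document_summary": Review.SIGN_OFF,
    "coverage_denial_letter": Review.PROHIBITED,  # policyholder-facing, regulated output
}

def review_requirement(use_case: str) -> Review:
    # Conservative default: anything the policy does not name needs sign-off.
    return USE_CASE_TIERS.get(use_case, Review.SIGN_OFF)

print(review_requirement("coverage_denial_letter").value)  # prohibited
print(review_requirement("meeting_notes").value)           # requires manager or legal sign-off
```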

Critically, the policy needs teeth. Acceptable use policies that exist only as PDF documents in a compliance portal have little operational effect. The policy must be paired with technical controls — monitoring, access management, and enforcement mechanisms — that create accountability. Employees should understand that AI tool usage is subject to the same oversight standards as any other enterprise system, and that the firm has visibility into how those tools are being used, even if not into the specific content of what is entered.

How to Audit and Monitor AI Tool Usage at Scale

Auditing AI tool usage in an insurance organization requires a different approach than traditional application monitoring. The challenge is that most AI tools are accessed through a web browser, often under personal or freemium accounts, and generate no logs in enterprise systems. Standard DLP solutions built to monitor email and file transfers are poorly suited to detecting or classifying this kind of usage. Security teams are often operating blind.

Browser-based governance tools address this gap by operating at the point of access — the browser — without needing to intercept or capture the content of what employees are actually typing. This is an important distinction for insurance firms navigating employee privacy considerations and attorney-client privilege concerns. The goal of AI governance monitoring is not to read employee prompts. It is to understand which AI tools are being accessed, at what frequency, by which teams, and for what broad category of purpose. That visibility alone is sufficient to identify high-risk usage patterns, enforce policy, and produce the audit evidence regulators are beginning to request.
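A usage event under this model might look like the following sketch: metadata about the access, never the prompt itself. The field names and the pseudonymous user identifier are assumptions for illustration.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageEvent:
    """Records that an AI tool was accessed -- never what was typed into it."""
    timestamp: str
    user_id: str   # pseudonymous identifier, not a name
    team: str
    tool: str
    category: str  # broad purpose only, e.g. "drafting" or "summarization"
    approved: bool

event = AIUsageEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user_id="u-4821",
    team="claims",
    tool="public-chatbot",
    category="summarization",
    approved=False,
)
print(json.dumps(asdict(event)))
```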

Compliance officers should establish a baseline of AI tool usage across the organization before attempting enforcement. In most mid-market insurance firms, this discovery phase reveals a significantly larger surface area than leadership expected — dozens of distinct AI tools used across underwriting, claims, HR, finance, and legal, many of which have never been reviewed by IT or legal. That inventory becomes the foundation for a risk-tiered governance program: approved tools with controls, conditionally approved tools under review, and blocked tools that represent unacceptable risk.
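As a sketch of what the discovery phase produces, the snippet below aggregates hypothetical observed (team, tool) pairs into an inventory and labels each tool with its review tier, treating anything not yet reviewed as unreviewed by default.

```python
from collections import defaultdict

# Hypothetical (team, tool) observations from the discovery phase.
observed = [
    ("underwriting", "public-chatbot"),
    ("claims", "public-chatbot"),
    ("claims", "vertical-claims-ai"),
    ("hr", "resume-screener-ai"),
    ("underwriting", "enterprise-copilot"),
]

# Tools that have already been through review; everything else is unreviewed.
TIERS = {"enterprise-copilot": "approved", "vertical-claims-ai": "conditional"}

inventory: dict[str, set[str]] = defaultdict(set)
for team, tool in observed:
    inventory[tool].add(team)

for tool, teams in sorted(inventory.items()):
    print(f"{tool:22} tier={TIERS.get(tool, 'unreviewed'):11} teams={sorted(teams)}")
```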

Incident Response When AI Governance Fails

Even mature AI governance programs will encounter incidents. The question is not whether an employee will eventually input sensitive data into an unapproved AI tool — it is how quickly the organization can detect it, assess the impact, and respond appropriately. Insurance firms should integrate AI-related data incidents into their existing incident response and breach notification frameworks rather than treating them as a separate category.

Detection is the hardest part. If your monitoring program provides visibility into which AI tools employees are accessing, you have a starting point. An anomalous spike in usage of a non-approved AI tool by a member of the claims team, for example, can trigger an investigation before the exposure escalates into a reportable breach. Once detected, the response playbook should include rapid assessment of what data categories may have been exposed, review of the AI vendor's data retention and training practices, legal analysis of notification obligations under applicable state and federal law, and documentation of remediation steps taken.
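Detection logic does not need to be sophisticated to be useful. A simple statistical threshold over daily usage counts, as in the sketch below, is enough to surface the kind of spike described above; the threshold value and baseline data here are illustrative.

```python
from statistics import mean, stdev

def is_spike(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than `threshold` standard deviations
    above the historical mean. Deliberately simple; production detection would
    account for seasonality, team size, and reporting lags."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    return today > mu + threshold * max(sigma, 1.0)  # floor sigma so flat baselines don't over-trigger

# Hypothetical daily access counts for a non-approved tool by one claims team.
baseline = [2, 1, 3, 2, 0, 2, 1]
print(is_spike(baseline, today=14))  # True -> open an investigation
print(is_spike(baseline, today=3))   # False -> within normal variation
```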

Regulators in the insurance space are paying close attention to how firms handle AI-related incidents. The NAIC framework and state-level guidance increasingly expect insurers to demonstrate proactive governance rather than reactive damage control. Firms that can show they had monitoring in place, detected the issue early, and responded with a documented process will be in a materially stronger position than those responding to a regulator's inquiry with no evidence of prior controls.

Operationalizing AI Governance: A Practical Framework

Building a sustainable AI governance program in insurance does not require solving every problem at once. The most effective programs follow a phased approach. In the first phase, the focus is discovery and policy: deploying usage monitoring to understand the current AI landscape, completing a risk assessment of identified tools, and drafting an acceptable use policy tailored to insurance-specific data categories and use cases. This phase typically takes sixty to ninety days and produces the organizational clarity necessary to make defensible decisions.

The second phase focuses on controls and accountability. This means implementing technical enforcement mechanisms — browser-based monitoring, network-level blocking of high-risk tools, and integration of AI governance data into existing SIEM and GRC platforms. It also means establishing clear ownership: which team is responsible for AI tool assessments, who approves exceptions, and how usage data is reviewed on a recurring basis. Compliance should own the policy framework; IT and security should own the technical controls; legal should be a standing participant in any high-risk use case review.
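Feeding governance events into an existing SIEM can be as simple as posting JSON to the platform's HTTP collector. The endpoint URL, schema, and event fields in this sketch are assumptions; every SIEM has its own ingestion format and authentication scheme.

```python
import json
import urllib.request

SIEM_ENDPOINT = "https://siem.example.internal/api/events"  # hypothetical collector URL

def forward_to_siem(event: dict) -> None:
    """POST a governance event to the SIEM's HTTP collector as JSON.
    The endpoint, auth, and schema are placeholders; every platform differs."""
    body = json.dumps({"source": "ai-governance", **event}).encode()
    request = urllib.request.Request(
        SIEM_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # body unused; transport errors raise

forward_to_siem({"tool": "public-chatbot", "team": "claims", "approved": False})
```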

The third phase is continuous improvement and regulatory alignment. AI tools and the regulatory environment around them are both evolving rapidly. Governance programs must build in regular review cycles — at minimum quarterly — to assess new tools entering the market, incorporate regulatory guidance as it is issued, and update policy to reflect what the organization is actually learning about risk in practice. Insurance firms that treat AI governance as a one-time compliance project will find themselves perpetually behind. Those that build it as an operational capability will be positioned to adopt AI tools confidently, capture the productivity benefits, and demonstrate to regulators, partners, and policyholders that they are doing so responsibly.

Take control of AI usage in your organization — Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
