The New Ransomware Playbook: AI as a Force Multiplier

Ransomware has always been a volume game. The more targets an attacker can hit, the higher the probability that someone pays. For years, the limiting factor was human labor — writing convincing phishing emails, customizing malware payloads, researching targets, and managing negotiations all required skilled operators. Large language models have changed that calculus entirely.

Since the public release of powerful LLMs in 2022 and 2023, cybersecurity researchers and law enforcement agencies have documented a measurable shift in attack sophistication and volume. The FBI's Internet Crime Complaint Center reported a 74% increase in ransomware losses between 2022 and 2023, and security vendors like CrowdStrike and Mandiant have specifically called out AI-assisted tooling as a contributing factor. This is no longer a theoretical threat — it is an operational reality.

For CISOs and security engineers, the challenge is two-pronged. Not only must they defend against AI-augmented external attacks, but they must also govern how their own employees interact with AI tools — because those interactions can inadvertently provide attackers with the intelligence they need to strike. Understanding the full attack surface requires looking at both sides of the AI equation.

How Attackers Are Using LLMs in the Kill Chain

The MITRE ATT&CK framework maps adversary behavior across reconnaissance, initial access, execution, persistence, and exfiltration. LLMs are now being deployed across nearly every phase of this kill chain, compressing the time required to move from target identification to full network compromise.
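
For orientation, the sections that follow can be condensed into a simple lookup. The tactic labels in this sketch loosely follow ATT&CK naming, and the descriptions summarize the uses detailed later in this article:

```python
# A compact summary of the LLM-assisted uses detailed below.
# Tactic labels loosely follow MITRE ATT&CK naming.
LLM_USES_BY_TACTIC = {
    "reconnaissance":       "aggregate OSINT into an organizational profile",
    "initial-access":       "generate personalized phishing and BEC lures",
    "execution":            "draft custom scripts and LotL command sequences",
    "privilege-escalation": "explain specific CVEs and exploitation paths",
    "defense-evasion":      "rewrite payloads into new syntactic variants",
}

for tactic, use in LLM_USES_BY_TACTIC.items():
    print(f"{tactic:>22}: {use}")
```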

During reconnaissance, attackers use LLMs to rapidly synthesize open-source intelligence. Given a company name, a publicly accessible LLM — or a jailbroken version of a commercial one — can aggregate information from LinkedIn profiles, job postings, SEC filings, and GitHub repositories to produce a detailed organizational profile. This includes identifying key personnel, tech stack details, third-party vendors, and potential entry points, all in minutes rather than hours.

In the lateral movement and privilege escalation phases, LLMs assist attackers by generating custom scripts and explaining complex vulnerabilities in plain language. Threat actors with moderate technical skill can now prompt an AI model to explain how to exploit a specific CVE, generate a PowerShell payload, or craft a Living-off-the-Land (LotL) attack sequence — dramatically lowering the barrier to entry for sophisticated techniques previously reserved for nation-state actors.

AI-Generated Phishing: The End of Obvious Scams

The most immediate and measurable impact of LLMs on ransomware campaigns is in phishing. The era of easily identifiable phishing emails — characterized by broken English, generic greetings, and implausible scenarios — is effectively over. LLMs produce grammatically flawless, contextually aware, and emotionally calibrated text at industrial scale.

Researchers at IBM X-Force demonstrated in 2023 that LLM-generated spear phishing emails achieved click rates nearly on par with those crafted by experienced human social engineers — but at a fraction of the cost and time. Attackers can now generate thousands of highly personalized emails by feeding an LLM a target's name, role, recent company news, and communication style. The result is a message that references a real internal project, mimics a colleague's writing tone, or exploits a timely business event like a merger or earnings announcement.

Business Email Compromise (BEC) campaigns, which are frequently used as the initial vector for ransomware deployment, have become particularly dangerous. Attackers use LLMs to impersonate executives with startling accuracy, drafting urgent wire transfer requests or credential-harvesting lures that withstand scrutiny from even seasoned employees. Security awareness training that teaches employees to look for typos and odd phrasing is now largely insufficient as a defense.
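
One practical countermeasure that does not depend on spotting typos is a display-name check at the mail gateway: flag messages whose sender claims to be a known executive but whose address sits on an external domain. The sketch below is illustrative only; the executive roster and domains are placeholders, and a real deployment would pull both from the directory.

```python
from email.utils import parseaddr

# Hypothetical data: a real deployment would source the executive roster
# and corporate domains from the directory, not a hard-coded list.
EXECUTIVES = {"jane doe", "john smith"}
INTERNAL_DOMAINS = {"example.com"}

def flag_exec_impersonation(from_header: str) -> bool:
    """Flag mail where the display name claims to be an executive
    but the address is not on an internal domain, a classic BEC tell."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return (display_name.strip().lower() in EXECUTIVES
            and domain not in INTERNAL_DOMAINS)

# A lookalike domain can slip past human review but not this check.
print(flag_exec_impersonation('"Jane Doe" <jane.doe@examp1e.com>'))  # True
```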

LLM-Assisted Malware Development and Evasion

Beyond social engineering, LLMs are accelerating the malware development lifecycle. Security researchers at Check Point documented cases in early 2023 where threat actors on dark web forums were sharing LLM-generated malware code — including information stealers and ransomware encryptors — and openly discussing techniques for jailbreaking commercial AI models to bypass content filters.

One of the most concerning capabilities is AI-assisted obfuscation. Ransomware operators have traditionally relied on packers and crypters to evade endpoint detection tools, but these are increasingly detected by modern EDR solutions. LLMs can generate functionally equivalent code with different syntactic signatures on each run, making signature-based detection far less reliable. This polymorphic capability — which once required specialized malware authors — is now accessible to script-level operators.
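
A toy demonstration makes the problem concrete: two functionally identical snippets that differ only in variable names produce unrelated cryptographic digests, so any signature keyed to the first variant never matches the second.

```python
import hashlib

# Two functionally identical code snippets that differ only in identifiers.
variant_a = b"total = 0\nfor n in data: total += n\n"
variant_b = b"acc = 0\nfor x in data: acc += x\n"

h_a = hashlib.sha256(variant_a).hexdigest()
h_b = hashlib.sha256(variant_b).hexdigest()

# The digests share nothing, so a signature written for variant_a never
# fires on variant_b. This is why behavioral detection matters.
print(h_a)
print(h_b)
```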

LLMs are also being used to accelerate vulnerability research. Given the source code of an open-source library or a description of a software component, models can identify logical flaws, suggest exploitation paths, and generate proof-of-concept code. This has implications for zero-day discovery timelines: what previously took weeks of manual analysis can now be partially automated, shortening the window between vulnerability existence and active exploitation.

The Insider Threat Angle: When Employees Feed the Machine

Here is where the external threat intersects with internal AI governance failures. As enterprise employees adopt AI tools at scale — using ChatGPT, Claude, Gemini, Copilot, and dozens of other platforms for daily work — they often share sensitive information in their prompts without fully understanding the exposure. Network diagrams, authentication configurations, incident response playbooks, vendor contract details, and internal system architecture descriptions are all routinely entered into unmanaged AI sessions.

This data doesn't disappear. Depending on the platform's data retention policies and training data agreements, sensitive prompt content may be stored, reviewed by employees of the AI provider, or potentially included in future model training runs. More immediately, if an employee's AI account is compromised through credential theft — itself a common precursor to ransomware attacks — the conversation history becomes a goldmine for attackers mapping internal systems.
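
One partial mitigation on the data-exposure side is to scrub obvious secrets from prompts before they leave the browser or gateway. The sketch below is a minimal illustration using a few hand-written regex patterns; a production deployment would rely on a proper DLP classifier rather than rules like these.

```python
import re

# Hypothetical patterns; a production DLP engine would go far beyond regex.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_ip": re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Our jump host is 10.2.14.7, key AKIAABCDEFGHIJKLMNOP")
print(hits)   # ['aws_access_key', 'private_ip']
print(clean)
```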

There is also the risk of unsanctioned AI tool usage that IT and security teams have no visibility into. An employee who installs a browser extension from an unverified developer, thinking it offers AI productivity features, may be routing their queries and clipboard content to a malicious third party. Without systematic monitoring of which AI tools are being used across the organization and how, security teams are operating blind. This visibility gap is precisely what platforms like Zelkir are designed to close — tracking AI tool usage and classifying the nature of that usage without capturing raw prompt content, so compliance teams can identify risk without creating new privacy problems.
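
To make the metadata-only approach concrete, the sketch below classifies AI-related network events by domain alone, without ever touching prompt content. This illustrates the general technique, not Zelkir's actual implementation, and the domain lists are placeholders.

```python
from collections import Counter

# Illustrative sketch only. The idea: classify AI usage from metadata
# (domains, counts) without capturing prompt content.
SANCTIONED = {"chat.openai.com", "claude.ai", "gemini.google.com"}
KNOWN_AI = SANCTIONED | {"chat.example-ai.dev"}  # hypothetical unsanctioned tool

def classify_ai_event(domain: str) -> str:
    if domain in SANCTIONED:
        return "sanctioned"
    if domain in KNOWN_AI:
        return "unsanctioned"
    return "not-ai"

# Aggregate proxy or DNS log entries into per-user usage counts.
events = [("alice", "claude.ai"), ("bob", "chat.example-ai.dev")]
usage = Counter((user, classify_ai_event(dom)) for user, dom in events)
print(usage)  # Counter({('alice', 'sanctioned'): 1, ('bob', 'unsanctioned'): 1})
```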

How Security Teams Can Fight Back

Defending against AI-augmented ransomware requires updating both technical controls and governance frameworks. On the technical side, organizations should prioritize email security solutions that use behavioral analysis and contextual awareness rather than purely signature-based filtering. Products from vendors like Abnormal Security and Proofpoint now apply their own AI models to detect AI-generated phishing — effectively fighting fire with fire at the inbox level.

Endpoint detection and response (EDR) platforms must be configured with behavioral rules that catch LotL techniques and polymorphic payloads rather than relying on static signatures. Network segmentation remains one of the highest-value controls for limiting ransomware blast radius — even when attackers achieve initial access, a well-segmented environment can prevent lateral movement and protect critical data stores. Immutable, air-gapped backups tested on a regular cadence are non-negotiable.
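
As an illustration of what a behavioral rule looks like in practice, the sketch below flags a classic LotL pattern: an Office application spawning a script interpreter, or any PowerShell invocation with an encoded command. The process lists are a common heuristic, not any specific vendor's rule set.

```python
# Minimal behavioral-rule sketch. Production EDR rules would also weigh
# code signing, process lineage depth, and full command-line context.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SCRIPT_CHILDREN = {"powershell.exe", "pwsh.exe", "wscript.exe", "cscript.exe", "mshta.exe"}

def suspicious_lotl(parent: str, child: str, cmdline: str) -> bool:
    office_to_script = parent.lower() in OFFICE_PARENTS and child.lower() in SCRIPT_CHILDREN
    encoded_ps = child.lower() in {"powershell.exe", "pwsh.exe"} and "-enc" in cmdline.lower()
    return office_to_script or encoded_ps

# Word spawning an encoded PowerShell command should always fire.
print(suspicious_lotl("WINWORD.EXE", "powershell.exe", "-enc <base64>"))  # True
```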

On the governance side, security teams need a formalized AI Acceptable Use Policy that specifies which AI tools are sanctioned, what categories of data employees may not share with AI platforms, and what the process is for evaluating new AI tools before deployment. This policy is only effective if it is enforced through monitoring — not just stated in a document. Implementing browser-level visibility into AI tool usage allows security and compliance teams to identify policy violations, detect the use of unsanctioned tools, and classify the nature of AI interactions in real time. This doesn't mean recording every prompt employees type; it means understanding the behavioral patterns and risk categories associated with AI usage across the organization.
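
One way to keep such a policy enforceable rather than purely documentary is to express it as structured data that the monitoring layer evaluates. The sketch below uses hypothetical tool names and data categories:

```python
# Hypothetical policy expressed as data so the monitoring layer can
# evaluate it, rather than leaving the rules in a PDF nobody checks.
AI_USE_POLICY = {
    "sanctioned_tools": {"claude.ai", "copilot.microsoft.com"},
    "prohibited_data": {"credentials", "source_code", "customer_pii", "network_diagrams"},
}

def evaluate(tool: str, data_category: str) -> str:
    if tool not in AI_USE_POLICY["sanctioned_tools"]:
        return "block: unsanctioned tool"
    if data_category in AI_USE_POLICY["prohibited_data"]:
        return "block: prohibited data category"
    return "allow"

print(evaluate("claude.ai", "customer_pii"))       # block: prohibited data category
print(evaluate("random-ai.example", "marketing"))  # block: unsanctioned tool
```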

Conclusion: Governing AI Before Attackers Exploit the Gap

The use of LLMs by ransomware operators represents a genuine inflection point in the threat landscape. The speed, scale, and sophistication advantages that AI provides to attackers are not marginal improvements — they are structural shifts that require security teams to rethink assumptions about phishing detection, malware analysis timelines, and the insider threat surface. Organizations that treat AI-augmented attacks as a distant, theoretical risk will find themselves unprepared when they face them operationally.

At the same time, the internal AI governance gap is creating a parallel vulnerability that many enterprises have yet to fully address. Employees are using AI tools every day — often with no policy, no monitoring, and no visibility from IT or security. The same AI ecosystem that attackers are weaponizing externally is the one your employees are feeding with sensitive organizational data from the inside. Closing this gap requires both policy and technology: clear rules about AI usage and the monitoring infrastructure to enforce them.

Security and compliance leaders who act now — establishing AI acceptable use policies, auditing which tools are in use across the organization, and implementing visibility platforms that can classify AI usage without violating employee privacy — will be measurably better positioned to detect and contain AI-assisted ransomware campaigns. The governance layer is not separate from the security strategy; it is part of it.

Your employees are already using AI tools — the question is whether your security team has any visibility into how. Try Zelkir for FREE today and get full AI visibility in under 15 minutes.
