The Machines Are Learning: How AI Became Cybercrime’s New Weapon

Jul 28, 2025
10 min read

Prologue: A New Kind of Threat

In late 2024, the CEO of a mid-sized European logistics firm received a Zoom call that appeared completely routine—at least on the surface. The video feed showed their CFO, a person they had worked with daily for over five years. The voice was unmistakable. The mannerisms, the blinking pattern, even the background—it all looked real. The CFO was urgently requesting an immediate transfer of $22 million to finalize a last-minute merger deal with an Asian shipping company.
Pressed for time and fueled by trust, the CEO complied. But none of it was real.
The video was a hyper-realistic deepfake. The voice? AI-generated from samples publicly available on webinars and corporate presentations. The urgency? Scripted perfectly using social engineering models trained on thousands of past scams.
By the time the truth came out, it was too late. The funds had already been laundered through dozens of wallets and accounts across multiple jurisdictions. The attackers? Untraceable.
This was no ordinary scam. It was a signal—a grim reminder that the era of digital deception had evolved beyond phishing emails and malware pop-ups.
This was cybercrime 2.0.
Welcome to a world where machines don’t just learn—they deceive, manipulate, and steal.

From Code to Cognition — When Cyberattacks Got Smart


There was a time when cyberattacks followed a formula. A bad actor would write some code—maybe a keylogger or a trojan—then send it via spammy emails or shady downloads. Most of these attacks were manually crafted, poorly worded, and relatively easy to detect if you were paying attention.
But from 2023 onwards, the game changed.
The introduction of generative AI tools like ChatGPT, Claude, and open-source models such as GPT-J and LLaMA unleashed an unintended consequence: cybercrime innovation. Criminals, scammers, and rogue developers began fine-tuning these tools to create their own malicious versions—models that would become digital weapons in the wrong hands.
Thus emerged names like WormGPT, FraudGPT, and DarkBARD—models explicitly trained on stolen code, exploit databases, and social engineering scripts. Unlike traditional malware, these tools didn’t need an expert. They only needed a prompt.
A teenage hacker in a basement could type:
“Write a phishing email pretending to be from PayPal about a failed transaction.”
And the AI would return a flawless, brand-compliant email with all the right formatting and tone.
Or this:
“Generate obfuscated Python malware that installs a keylogger and hides from antivirus.”
Again, the model delivered.
Suddenly, the skill gap between amateurs and elite hackers collapsed. AI became the great equalizer—arming low-level threat actors with tools that once required years of expertise to master.
What’s more, these models could now:

  • Write emails in multiple languages with perfect grammar
  • Detect tone and adjust for urgency or familiarity
  • Obfuscate code to evade detection
  • Generate phishing websites that mimic real ones down to the pixel
  • Write scripts that auto-adapt to system settings and bypass user prompts

Rise of the Machines – Case Studies from the Frontline


When cyberattacks become headlines, we often see the outcome: lost money, leaked data, disrupted services. But rarely do we get to understand the how—the mechanics behind these modern digital heists. Let’s dive into three cases that illustrate how AI is being used in the wild.

1. The WormGPT Revolution

In mid-2023, a new tool began circulating across dark-web forums: WormGPT, a fine-tuned version of an open-source LLM, trained on malicious code repositories, exploit write-ups, and pentesting scripts. Within weeks, threat actors were sharing prompts, results, and even prepackaged attack templates built using the model.
Unlike traditional malware kits, WormGPT didn’t require technical knowledge. It required imagination.
Users could simply type:
"Create a PowerShell script that downloads a payload from an external server and disables Windows Defender."
WormGPT complied, producing working code with inline comments, optional obfuscation, and even advice on how to deploy the script via phishing.
This lowered the entry barrier to cybercrime drastically. Wannabe hackers didn’t need to learn how to write shellcode, understand memory buffers, or bypass AV manually. They just needed to describe what they wanted—in plain English.
Forums were soon flooded with real-world usage reports:

  • “Used WormGPT to automate phishing campaigns—ROI up 30%.”
  • “Created ransomware variant that changes signature every execution.”
  • “Bypassed EDR using obfuscation tricks suggested by the model.”
By late 2023, cybersecurity vendors had flagged thousands of new malware samples linked to this AI-assisted wave. Most alarming? Many of these variants had never been seen before.
We weren’t just fighting code anymore; we were fighting creativity on steroids.

2. The Deepfake CEO Fraud

In another chilling case, an employee at a multinational finance firm in the UK received a late afternoon call from their regional CEO. The voice was familiar—urgent, direct, and perfectly timed to align with a running project. The CEO requested a high-priority wire transfer to a new vendor overseas. The transfer needed to be immediate to secure a contract.
What the employee didn’t know:

  • The CEO was vacationing in New Zealand at the time.
  • The voice was a deepfake, generated from voice samples lifted from internal training videos and webinars.
  • The attackers had scraped internal comms, Slack messages, and project schedules to time the attack perfectly.
The attack bypassed two-factor authentication, secure email gateways, and spam filters, not because of a technical flaw but because it manipulated human trust.
This wasn’t just an email scam. It was a synthetic identity attack, powered by AI tools that combined:

  • Voice synthesis
  • Video deepfake generation
  • Social engineering scripts
  • Behavioral timing

According to a 2024 report by Interpol, over $100 million was lost globally to deepfake-enabled scams in that year alone, and many more incidents likely went unreported due to reputational risk.
The scariest part? These tools are now offered as “Deepfake-as-a-Service” on darknet marketplaces, with pricing tiers based on target profile complexity.

3. The Rise of Adaptive Malware

Traditional malware had one major weakness: predictability. Once reverse engineered, it could be detected, blocked, and neutralized across systems.
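To see why that predictability mattered, it helps to remember what classic signature matching boils down to: fingerprint a file, then compare the fingerprint against a catalogue of known-bad samples. The snippet below is a deliberately minimal Python sketch of that idea; the KNOWN_BAD_HASHES set is a made-up placeholder, and real engines layer byte-pattern rules, YARA signatures, and heuristics on top of simple hashing.

```python
import hashlib

# Hypothetical signature set: a real engine ships millions of entries plus
# byte-pattern and behavioral rules. This is only an illustration.
KNOWN_BAD_HASHES: set[str] = {
    "d2f1c0de0000000000000000000000000000000000000000000000000000beef",  # made-up entry
}

def sha256_of(path: str) -> str:
    """Fingerprint a file by hashing its full contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_known_malware(path: str) -> bool:
    # A single changed byte produces a completely different hash, which is
    # exactly why malware that rewrites itself on each run never matches.
    return sha256_of(path) in KNOWN_BAD_HASHES
```

The moment a sample mutates its own bytes, the fingerprint changes and this check silently passes; that gap is what the newer breed of malware is built to exploit.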
But in 2025, we are seeing malware evolve—literally.
A new breed of adaptive malware is now being reported by analysts. These aren’t static binaries. They’re dynamic agents that:

  • Use machine learning models to mutate their code automatically when they detect a honeypot or sandbox
  • Introduce randomness in behavior to avoid behavioral signatures
  • “Sleep” if suspicious logging or analysis tools are found, resuming operation only after hours of inactivity
  • Reconfigure their command-and-control protocols to mimic legitimate traffic (e.g., using DNS tunneling or Slack webhooks)
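That last point, blending command-and-control traffic into DNS, is worth a closer look. Defenders often start with a crude heuristic: tunneled payloads tend to show up as unusually long, random-looking subdomain labels. The sketch below illustrates that heuristic; the 40-character and 4.0-bits-per-character thresholds are illustrative assumptions, not tuned values from any production detector.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Average bits of information per character; encoded payloads score high."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_dns_tunneling(
    query_name: str,
    max_label_len: int = 40,         # illustrative threshold
    entropy_threshold: float = 4.0,  # illustrative threshold
) -> bool:
    """Flag queries whose leftmost label is unusually long or random-looking."""
    labels = query_name.rstrip(".").split(".")
    subdomain = labels[0] if labels else ""
    if not subdomain:
        return False
    return (len(subdomain) > max_label_len
            or shannon_entropy(subdomain) > entropy_threshold)
```

The whole point of the mimicry described above is to stay under simple rules like this one, which is why detection increasingly has to look at traffic patterns over time rather than single queries.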

Breaking Down the Beast – How AI Attacks Work


To the untrained eye, AI-driven attacks might seem like magic—emails too perfect, scams too personalized, malware too evasive. But behind the scenes, these attacks follow a precise, methodical flow. The difference? Every stage is now supercharged by machine learning.
Let’s walk through a full AI-enabled attack cycle, stage by stage.

1. Reconnaissance – Machines That Research Better Than Humans

Before launching an attack, cybercriminals need context. They need names, email patterns, job titles, technologies used, company structure, and more. In the past, this meant manual scraping or basic OSINT tools.
Now, AI does it better—and faster.

  • Language models, paired with scraping tools, are fed a company name and assemble employee directories from LinkedIn, GitHub, press releases, and team pages.
  • Facial recognition and voice scraping tools gather visual and audio samples from YouTube videos, podcasts, webinars, and conference recordings.
  • NLP algorithms analyze writing tone from social media posts to imitate communication styles—professional, casual, or friendly.
A single AI-powered recon engine can map out:

  • The org chart of a company
  • Relationships between departments
  • Employee hobbies and behavioral traits
  • Historical projects and upcoming product launches

Example: An attacker types:
“Map out the leadership team at AcmeCorp, including emails and LinkedIn activity.”
Within minutes, the AI can return a list of likely targets, including:

  • A CTO who recently posted about hiring
  • A CFO who spoke at a webinar
  • A CISO who engaged with a cybersecurity awareness post

2. Attack Design – Prompt the Crime

Once the data is ready, the attacker moves to phase two: designing the attack. This is where prompt engineering meets social engineering.
Example prompt:
“Generate a phishing email from HR asking employees to verify their salary and bank details. Make it sound urgent but polite. Use a tone matching internal HR memos.”
The AI will:

  • Match the company’s language style using previously scraped emails
  • Insert fake links that mirror the internal payroll portal
  • Use timing-based tactics (“response required within 24 hours”)
If the attacker adds:
“Translate it to French and Arabic, but keep the tone and urgency intact.”
The model complies, enabling multilingual phishing at scale.
It doesn’t stop at emails:

  • It can build phishing websites, matching the CSS and logos of internal tools
  • It can design malware delivery chains disguised as HR docs or invoices
  • It can even craft voicemail scripts or SMS smishing messages

Key Feature: AI doesn’t just help you launch an attack; it helps you optimize it for psychology, timing, and believability.

3. Execution – Machines That Time the Shot

AI models also assist in when and how to launch attacks.

  • Email is sent at 9:17 AM, right after most morning meetings and coffee breaks.
  • Messages are scheduled to land during periods of known high activity—when users are most distracted.
  • Fake login pages are deployed temporarily using cloud functions, then destroyed after a short window to avoid detection.
Some attackers are using AI bots to monitor mailboxes for replies and modify follow-ups in real time. Example:

  • If a victim responds with “Is this verified?”, the AI can instantly generate a convincing response from a fake internal IT staffer.

Adaptive Attacks: Some systems now use reinforcement learning to try variations:

  • Different subject lines
  • Varying CTA buttons
  • Adjusted tone based on open/click rates

This isn’t spam. It’s targeted, intelligent engagement designed to maximize success rate, and the optimization loop behind it is surprisingly simple, as the sketch below shows.
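The “reinforcement learning” here is usually nothing exotic; the same bandit-style selection an ad platform uses to pick a winning subject line is enough. What follows is a minimal epsilon-greedy sketch over abstract message variants, where the variant names, the 10% exploration rate, and the engagement-based reward are all illustrative assumptions rather than any observed tooling.

```python
import random

# Hypothetical variants; in ad tech these would be subject lines or CTA buttons.
VARIANTS = ["variant_a", "variant_b", "variant_c"]

sends = {v: 0 for v in VARIANTS}    # how many times each variant was tried
clicks = {v: 0 for v in VARIANTS}   # observed engagement per variant

def choose_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: usually exploit the best performer, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: clicks[v] / sends[v] if sends[v] else 0.0)

def record_outcome(variant: str, engaged: bool) -> None:
    """Update the running statistics after each send."""
    sends[variant] += 1
    clicks[variant] += int(engaged)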

4. Post-Attack Learning – AI That Evolves

What makes AI attacks so dangerous is their feedback loop.
After the initial wave:

  • Logs are analyzed automatically to see which emails were opened
  • Click-through rates, form fills, and endpoint infections are mapped
  • AI models adjust their strategy for the next campaign
Just like ad tech optimizes campaigns based on engagement, AI-driven cybercrime adapts to maximize ROI.
And it doesn’t stop there:

  • Stolen data is labeled and categorized, ready for resale or reuse in new campaigns
  • Credential reuse attacks are launched on other services automatically
  • AI generates reports for threat actors, summarizing what worked and why
