
When AI Becomes a Force Multiplier for Cybercrime: Lessons from the FortiGate Campaign
Feb 21, 2026
Over the past few years, there has been a lot of hype about “AI-powered hacking.” This case is more grounded than that, and in some ways, more concerning.
A relatively low-skill, financially motivated attacker used commercial AI tools to run a large, global campaign against FortiGate firewall devices. They did not discover a zero-day. They did not develop custom exploits. They did not break the software.
They simply found FortiGate management portals exposed to the internet, tried common passwords, and logged in where basic security hygiene was weak. The techniques were old. The scale was new.
AI did not make the attacker sophisticated. It made them efficient.
What Actually Happened
Between January 11 and February 18, 2026, the actor accessed over 600 FortiGate devices across more than 55 countries. There was no exploitation of FortiGate vulnerabilities. Instead, the attacker:
- Scanned the internet for exposed management ports (443, 8443, 10443, 4443)
- Attempted authentication using common or reused credentials
- Gained access where single-factor authentication was in place
- Downloaded full device configuration files
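The first step above is easy to check from the defender's side. Here is a minimal sketch that audits your own hosts for the same management ports the campaign probed; the hostnames are placeholders, and the port list comes straight from the article. This is a quick TCP reachability check, not a substitute for a proper external scan.

```python
# Sketch: check your own devices for exposed management ports.
# Ports are the ones probed in this campaign; hosts are illustrative.
import socket

MGMT_PORTS = [443, 8443, 10443, 4443]

def exposed_ports(host: str, timeout: float = 2.0) -> list[int]:
    """Return which management ports on `host` accept a TCP connection."""
    open_ports = []
    for port in MGMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Run it against your own firewall's public address (e.g. `exposed_ports("fw1.example.com")`); any port it returns is one an internet-wide scanner will find too.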
Those configuration files are extremely valuable to an attacker. They can contain VPN usernames, recoverable passwords, network topology, and internal architecture details. With that information in hand, the attacker pivoted into victim networks.
What They Did After Getting In
After gaining VPN access, the attacker followed a standard ransomware playbook:
- Mapped internal networks and identified domain controllers
- Performed DCSync attacks to dump Active Directory password hashes
- Targeted backup servers, especially Veeam, to destroy recovery options
In several cases, the actor successfully extracted entire domain credential databases. However, whenever they encountered hardened environments with MFA, segmentation, or proper patching, they failed and moved on.
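DCSync activity is detectable because replication rights are requested via directory-service access events (Windows Security Event ID 4662) carrying well-known replication permission GUIDs. The sketch below flags such requests from accounts that are not domain controllers. The GUIDs are the standard DS-Replication-Get-Changes identifiers; the event-dict shape and the DC allowlist are assumptions about your own log pipeline.

```python
# Sketch: flag possible DCSync activity in parsed Windows Security events.
# The replication permission GUIDs below are the standard ones; the event
# dict fields and KNOWN_DC_ACCOUNTS are assumptions about your log export.
REPL_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}
KNOWN_DC_ACCOUNTS = {"DC01$", "DC02$"}  # machine accounts allowed to replicate

def suspicious_dcsync(events: list[dict]) -> list[dict]:
    """Return 4662 events requesting replication rights from non-DC accounts."""
    hits = []
    for ev in events:
        if ev.get("event_id") != 4662:
            continue
        props = {g.lower() for g in ev.get("properties", [])}
        if props & REPL_GUIDS and ev.get("account") not in KNOWN_DC_ACCOUNTS:
            hits.append(ev)
    return hits
```

A user account or workstation requesting replication rights is almost never legitimate, which makes this one of the higher-signal detections available for this playbook.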
Where AI Comes In
This is where things get interesting. Investigators found that the attacker had stored:
- AI-generated attack plans and playbooks
- AI-written scripts in Go and Python
- Internal victim network data submitted to LLMs for guidance
The code showed clear signs of AI assistance: excessive comments restating obvious function names, simplistic architecture, and poor handling of edge cases. AI acted as a force multiplier, allowing a single individual to generate tooling and plans in hours that would normally require a whole team.
What This Campaign Tells Us
1. Fundamentals Still Matter
This campaign succeeded because management interfaces were exposed and credentials were weak. These are basic security gaps that should not exist in 2026.
2. AI Lowers the Barrier to Entry
You no longer need deep exploit development skills to conduct large-scale operations. AI can generate the scripts and structure the attack paths for you.
3. Behavior-Based Detection Is Critical
The attacker relied on legitimate tools and valid credentials, leaving few traditional indicators of compromise. Monitoring for abnormal behavior, such as unusual VPN access patterns or unexpected AD replication requests, is more reliable here than signature- or indicator-based detection.
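One simple form of this behavioral monitoring is baselining where each user normally logs in from and flagging deviations. The sketch below assumes a VPN log export with `user` and `country` fields per login; both field names and the country-only baseline are illustrative simplifications of what a real detection would use.

```python
# Sketch: flag VPN logins from countries a user has never logged in
# from before. Log field names ("user", "country") are assumptions
# about your VPN export format.
from collections import defaultdict

def flag_anomalous_logins(history: list[dict], new_events: list[dict]) -> list[dict]:
    """Return new login events whose country is absent from the user's history."""
    seen = defaultdict(set)
    for ev in history:
        seen[ev["user"]].add(ev["country"])
    return [ev for ev in new_events if ev["country"] not in seen[ev["user"]]]
```

In practice you would baseline more than geography (time of day, device, ASN), but even this crude check would have surfaced a stolen credential being used from an unfamiliar network.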
4. Security Maturity Still Wins
Where organizations had MFA, proper patching, and hardened backups, the attacker failed. Good fundamentals still stop bad actors.
