
When Security Training Becomes a Security Risk
Why vulnerable apps do not belong in corporate environments
Security teams regularly use intentionally vulnerable applications to train staff, test scanners, and validate detection tooling. Projects like OWASP Juice Shop, DVWA, or custom vulnerable web apps are widely used for this purpose.
However, recent security incidents have shown how these same tools can become a real attack vector when deployed carelessly, even inside large and security-mature organizations.
In multiple reported cases, attackers successfully breached Fortune 500 companies after discovering exposed vulnerable security testing applications that were never meant to be accessible outside controlled training scenarios.
This post explains where the risk comes from, why "internal only" is not a sufficient control, and what safer alternatives look like.
Recent incidents: when training labs become entry points
According to recent security reports, threat actors have been actively scanning for intentionally vulnerable applications such as OWASP Juice Shop and DVWA.
These applications were found exposed to the internet or accessible from poorly segmented internal networks. Because they are designed to be exploitable, attackers were able to:
- Gain initial access with trivial exploits
- Deploy web shells or cryptominers
- Steal credentials and cloud access tokens
- Move laterally into production systems
Some of the affected organizations were Fortune 500 companies, including firms with dedicated security teams.
No advanced techniques or zero-day vulnerabilities were required. The failures were largely operational.
The problem: intentionally vulnerable software is still vulnerable
Vulnerable web applications are designed to be exploited. That is their value and also their danger.
When deployed in:
- Corporate cloud accounts
- Internal networks
- Environments with shared identity, credentials, or trust
they introduce real attack surfaces that attackers actively look for.
Security tools do not become safe simply because their purpose is training.
A practical example from real testing
In my own testing, I deployed intentionally vulnerable web applications on a standalone VPS specifically to evaluate the detection capabilities of popular vulnerability scanners.
The setup was intentionally separated:
- One VPS hosted the vulnerable application
- A second VPS was used exclusively to launch scans
- No corporate infrastructure, credentials, or trusted networks were involved
Both VPS instances were destroyed immediately after testing.
Importantly, I did not manually browse or interact with the exposed application. Anyone working in security understands that the moment something is exposed to the public internet, it can take seconds to minutes before automated scanners and exploit attempts begin. Scanner traffic and attacker traffic often overlap, making manual interaction unnecessary and risky.
While this approach reduces risk when done carefully and briefly, it still highlights an important point: even short-lived exposure of vulnerable systems is never risk-free.
Why "internal only" is not a safety guarantee
A common assumption is that keeping vulnerable apps internal makes them safe. In practice, this assumption often fails.
Internal networks are routinely breached
Phishing, malware, compromised VPN credentials, and unmanaged endpoints mean attackers regularly gain internal access. Once inside, intentionally vulnerable apps are low-effort, high-reward targets.
Cloud environments amplify blast radius
In cloud environments, a compromised training app may have access to IAM roles, environment variables, API tokens, or metadata services. This can allow attackers to escalate from a training lab to full cloud account compromise.
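To make the blast radius concrete, here is a minimal sketch of why environment-injected credentials matter. The variable prefixes are common naming conventions rather than an exhaustive list, and the demo environment is entirely hypothetical:

```python
# Sketch: any code-execution bug in a vulnerable lab app lets an attacker
# read the process environment. If the container was launched with cloud
# credentials, those leak immediately. Prefixes below are common naming
# conventions, not an exhaustive list.
import os

SENSITIVE_PREFIXES = ("AWS_", "AZURE_", "GOOGLE_", "GITHUB_")

def harvestable_credentials(env=None):
    """Return the names of environment variables an attacker could read."""
    env = os.environ if env is None else env
    return sorted(k for k in env if k.startswith(SENSITIVE_PREFIXES))

# Hypothetical lab container accidentally started with deployment creds:
demo_env = {
    "AWS_ACCESS_KEY_ID": "AKIA...",
    "AWS_SECRET_ACCESS_KEY": "...",
    "PATH": "/usr/bin",
}
print(harvestable_credentials(demo_env))
# → ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY']
```

The same logic applies to instance metadata services: anything reachable from the lab workload should be assumed readable by whoever exploits it.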
Temporary setups often become permanent
Training labs are frequently deployed quickly, poorly documented, forgotten after use, and excluded from monitoring and patching. What was meant to be short-lived often is not.
Even short-lived public exposure is risky
Some practitioners deploy vulnerable apps on public VPS instances for brief testing, then destroy them. While this reduces risk, it does not eliminate it.
Internet-wide scanners can discover and probe vulnerable services within minutes. Destroying the instance afterward prevents persistence, but it cannot undo credential theft, data exfiltration, or reconnaissance that may have already occurred.
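The speed of discovery is easy to quantify from an access log. The sketch below measures the window between boot and the first unsolicited probe; the log lines and timestamps are fabricated for illustration, though the probed paths (`/.env`, `/phpmyadmin/`) are typical of real scanner traffic:

```python
# Sketch: how long a fresh VPS stays unnoticed. We compare the instance's
# boot time against the first entry in the web server's access log.
# Log lines below are illustrative Common Log Format examples.
import re
from datetime import datetime

ACCESS_LOG = """\
203.0.113.9 - - [01/May/2024:12:03:41 +0000] "GET /.env HTTP/1.1" 404 153
198.51.100.4 - - [01/May/2024:12:05:02 +0000] "GET /phpmyadmin/ HTTP/1.1" 404 153
"""

def seconds_to_first_probe(log: str, boot_time: datetime) -> float:
    """Seconds between boot and the first request recorded in the log."""
    first_ts = re.search(r"\[(.*?)\]", log).group(1)
    first = datetime.strptime(first_ts, "%d/%b/%Y:%H:%M:%S %z")
    return (first - boot_time).total_seconds()

boot = datetime.strptime("01/May/2024:12:00:00 +0000", "%d/%b/%Y:%H:%M:%S %z")
print(seconds_to_first_probe(ACCESS_LOG, boot))  # → 221.0
```

In this fabricated example the first probe arrives under four minutes after boot, which is consistent with the "minutes, not hours" discovery window described above.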
This approach may be acceptable for controlled individual experimentation, but it does not scale safely in organizational environments.
Better alternatives for security training
1. Use dedicated training platforms
Platforms such as TryHackMe, Hack The Box, and Offensive Security labs are purpose-built for security training.
They provide:
- Fully isolated environments
- Realistic attack scenarios
- Automatic resets
- No connection to corporate infrastructure
Most importantly, they introduce no blast radius to production systems.
2. Use sandboxes, not corporate laptops
Training and exploitation exercises should never be conducted on corporate laptops, machines connected to internal VPNs, or systems authenticated to production identity providers.
Instead, use:
- Dedicated virtual machines
- Disposable cloud accounts
- Personal lab machines
- Vendor-hosted cyber ranges
This limits exposure even if a system is compromised during training.
3. Treat vulnerable labs as hostile systems
If you must deploy vulnerable applications yourself:
- Isolate them completely
- Block outbound access where possible
- Avoid shared credentials and SSH keys
- Auto-destroy after use
- Assume compromise
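One cheap way to enforce the isolation items above is a pre-deployment egress check. The sketch below uses Python's standard ipaddress module; the lab segment and "corporate" CIDR are hypothetical placeholders, while 169.254.169.254 is the standard cloud metadata address:

```python
# Sketch: verify that a lab host's allowed egress destinations never
# leave the isolated lab segment. All CIDRs except the metadata address
# are hypothetical placeholders.
import ipaddress

LAB_NET = ipaddress.ip_network("10.66.0.0/24")     # hypothetical lab segment
FORBIDDEN = [
    ipaddress.ip_network("169.254.169.254/32"),    # cloud metadata service
    ipaddress.ip_network("10.10.0.0/16"),          # hypothetical corporate range
]

def egress_allowed(dest: str) -> bool:
    """Permit traffic only inside the lab segment, never to forbidden ranges."""
    addr = ipaddress.ip_address(dest)
    if any(addr in net for net in FORBIDDEN):
        return False
    return addr in LAB_NET

print(egress_allowed("10.66.0.5"))        # lab host → True
print(egress_allowed("169.254.169.254"))  # metadata service → False
print(egress_allowed("10.10.3.7"))        # corporate network → False
```

A check like this belongs in whatever automation deploys the lab, so that a misconfigured firewall rule fails the deployment rather than silently widening the blast radius.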
In other words, treat them as if they are already breached, because eventually they will be.
The key takeaway
The issue is not that security teams train or test tools. That is essential.
The issue is where and how that training happens.
Recent Fortune 500 breaches demonstrate a simple truth:
Tools designed to be exploited should never live in environments that matter.
By using dedicated platforms, proper sandboxes, and strict isolation, organizations can build real security skills without accidentally creating their next incident.
