AI, Deepfakes, and Custom Malware

The cryptocurrency industry has long been a high-value target for financially motivated threat actors. But recent investigations show something more concerning than another wallet exploit or phishing email. We are now seeing highly coordinated campaigns that blend social engineering, custom-built malware, and AI-assisted deception into a single operation.

This is not just another phishing attempt. It represents a more mature model of cybercrime, one that relies as much on psychological manipulation as it does on technical skill.

A Shift in Tactics

A threat group tracked as UNC1069 has been targeting cryptocurrency companies, developers, and venture capital firms. The group, which has been linked to North Korean operations, has been active for years. What stands out in this recent activity is not only who they are targeting, but how they are doing it.

Instead of relying solely on email lures or malicious attachments, the attackers combined compromised messaging accounts, staged video calls, and tailored macOS malware. The operation appears designed to extract as much valuable information as possible from a single individual.

It was precise, layered, and clearly intentional.

The Fake Zoom Meeting Case

One of the most striking elements of the campaign involved a fake Zoom meeting.

The intrusion began with a message on Telegram that appeared to come from a legitimate cryptocurrency executive. In reality, that executive’s account had already been compromised. Because the message came from a trusted industry figure, it immediately lowered suspicion.

After building rapport over multiple exchanges, the attacker sent a scheduling link for a video meeting. The link looked normal and professional. It directed the victim to what appeared to be a legitimate Zoom session.

The domain, however, was spoofed.

When the victim joined the meeting, they reportedly saw what appeared to be a well-known CEO from another cryptocurrency company on video. The video feed looked authentic. It mimicked a real business conversation. The audio, however, was not working properly.

This is where the manipulation escalated.

The “participants” in the meeting suggested that the victim run a few troubleshooting commands to resolve the audio issue. The instructions looked technical but harmless, consistent with what someone might try when debugging sound problems.

Hidden inside those commands was a malicious download-and-execute step.

The victim believed they were fixing a microphone issue. Instead, they triggered the infection chain.

Although forensic evidence could not conclusively prove that the video feed was AI-generated, the characteristics strongly resembled reported deepfake-enabled scams. Whether the video was fully synthetic or carefully edited, the objective was clear: reinforce legitimacy, create urgency, and build enough credibility to convince the victim to act.

This case highlights something important. Social engineering no longer relies only on text. It can now include convincing live video deception.

The Multi-Stage Infection Chain

Once the malicious command was executed, a structured sequence of malware components was deployed.

The first payload established a foothold and collected system details. A downloader component then contacted remote servers to retrieve additional tools. From there, multiple specialized malware families were introduced to expand access and harvest data.

Investigators ultimately identified seven distinct malware families involved in the operation. Several had not been publicly documented before.

This was not opportunistic malware. It was modular and purpose-built.

Targeting macOS Privacy Controls

One of the more sophisticated tools focused on bypassing macOS privacy protections.

Apple’s Transparency, Consent, and Control (TCC) framework is meant to restrict applications from accessing sensitive resources such as documents, keychains, and messaging databases without user approval. The attackers deployed a tool that directly manipulated this system, effectively granting itself broad permissions without triggering obvious prompts.

With elevated access, the malware harvested:

  • Stored credentials from the user’s keychain
  • Browser login data and cookies
  • Data from Telegram
  • Local notes and user files

By modifying the underlying permissions database, the malware avoided drawing attention through standard security prompts.

For defenders, this reinforces a critical point: operating system protections are powerful, but they are not invulnerable if an attacker gains local execution.
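
One practical follow-up is to review what the user-level TCC database has actually granted. The Python sketch below is illustrative only and not part of the original research: the database path and the access table columns (service, client, auth_value) are assumptions based on recent macOS versions, and reading the database may itself require giving your terminal Full Disk Access.

    import os
    import sqlite3

    # Minimal sketch: list apps granted privacy permissions in the user-level
    # TCC database. Path and schema are assumptions for recent macOS versions.
    TCC_DB = os.path.expanduser(
        "~/Library/Application Support/com.apple.TCC/TCC.db"
    )

    def list_grants(db_path):
        """Return (service, client, auth_value) rows from the TCC access table."""
        conn = sqlite3.connect(db_path)  # no writes are issued below
        try:
            return conn.execute(
                "SELECT service, client, auth_value FROM access"
            ).fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        for service, client, auth_value in list_grants(TCC_DB):
            status = "ALLOWED" if auth_value == 2 else "other"  # 2 = allowed on macOS 11+
            print(f"{status:8} {service:45} {client}")

An unexpected third-party entry holding a grant such as kTCCServiceSystemPolicyAllFiles is worth investigating.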

Browser-Level Surveillance

Another component installed itself in Chromium-based browsers such as Chrome and Brave, posing as a legitimate extension associated with offline document editing.

Behind the scenes, it was capable of:

  • Capturing keystrokes
  • Recording username and password entries
  • Extracting browser cookies
  • Staging data for exfiltration

Session cookies are particularly dangerous in cryptocurrency environments. They can allow attackers to hijack active sessions without needing credentials or multi-factor codes.

In practical terms, that can translate directly into unauthorized transactions.
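
Auditing what is actually installed does not require specialist tooling. The sketch below is a hedged starting point in Python: the profile paths are assumptions for default Chrome and Brave installs on macOS, and the list of "risky" permissions is only an example of what to flag for review.

    import glob
    import json
    import os

    # Minimal sketch: flag installed Chromium extensions that request broad
    # permissions. Paths assume default Chrome/Brave profiles on macOS.
    EXTENSION_GLOBS = [
        "~/Library/Application Support/Google/Chrome/*/Extensions/*/*/manifest.json",
        "~/Library/Application Support/BraveSoftware/Brave-Browser/*/Extensions/*/*/manifest.json",
    ]

    RISKY = {"<all_urls>", "cookies", "webRequest", "tabs", "history", "clipboardRead"}

    def risky_extensions():
        for pattern in EXTENSION_GLOBS:
            for path in glob.glob(os.path.expanduser(pattern)):
                try:
                    with open(path, encoding="utf-8") as fh:
                        manifest = json.load(fh)
                except (OSError, json.JSONDecodeError):
                    continue
                requested = {str(p) for p in manifest.get("permissions", [])}
                requested |= {str(p) for p in manifest.get("host_permissions", [])}
                flagged = (requested & RISKY) | {p for p in requested if "://*" in p}
                if flagged:
                    yield manifest.get("name", "unknown"), sorted(flagged), path

    if __name__ == "__main__":
        for name, flagged, path in risky_extensions():
            print(f"{name}: {', '.join(flagged)}\n  {path}")

Extension names in manifests are often localization placeholders, so the path, which contains the extension ID, is usually the more reliable identifier.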

Persistence for Long-Term Access

The attackers did not stop at data theft. They implemented persistence mechanisms to ensure their malware would execute automatically on system startup.

This allowed continued access even after reboots. It also provided a stable platform for deploying additional tools if needed.
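
On macOS, the most common startup persistence points are launchd property lists. The sketch below is a minimal inventory in Python, offered as an assumption-laden starting point rather than a complete check: it covers LaunchAgents and LaunchDaemons only, not login items, cron jobs, or configuration profiles.

    import glob
    import os
    import plistlib

    # Minimal sketch: list launchd property lists and the programs they run.
    LAUNCH_DIRS = [
        "~/Library/LaunchAgents",
        "/Library/LaunchAgents",
        "/Library/LaunchDaemons",
    ]

    def launch_items():
        for directory in LAUNCH_DIRS:
            pattern = os.path.join(os.path.expanduser(directory), "*.plist")
            for path in glob.glob(pattern):
                try:
                    with open(path, "rb") as fh:
                        item = plistlib.load(fh)  # handles XML and binary plists
                except Exception:
                    continue  # skip unreadable or malformed plists
                program = item.get("Program") or item.get("ProgramArguments", [])
                yield item.get("Label", "?"), program, path

    if __name__ == "__main__":
        for label, program, path in launch_items():
            print(f"{label}\n  program: {program}\n  plist:   {path}")

Comparing the output against a known-good baseline makes new or renamed entries stand out quickly.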

The number of components deployed on a single host suggests the attackers were aiming for maximum data extraction rather than minimal access.

Why This Matters for Crypto and Web3

The crypto ecosystem presents a uniquely attractive target.

Developers often have access to repositories, wallet infrastructure, and private keys. Executives may control exchange accounts or treasury functions. Venture capital professionals hold insight into deal flow and confidential communications.

Compromising one individual can create ripple effects across multiple organizations.

The addition of realistic video deception increases the attack surface. In industries where remote meetings are routine and partnerships move quickly, trust can be exploited with remarkable efficiency.

The Growing Role of AI in Social Engineering

Even without confirmed forensic proof of a deepfake in this specific case, the pattern aligns with broader industry trends. Threat actors are increasingly experimenting with generative AI to improve the realism of their lures.

This can include:

  • Editing executive images and branding
  • Creating convincing fake identities
  • Generating polished communication
  • Potentially simulating live video participants

As the technology improves, visual verification alone will no longer be sufficient.

Organizations will need stronger identity validation processes for high-risk conversations, especially those involving financial decisions or technical instructions.

Defensive Takeaways

There are clear lessons from this incident.

  • Never execute command-line instructions provided during an unexpected support interaction, especially within a live meeting.
  • Scrutinize meeting URLs carefully. Minor domain variations are a common tactic; a simple hostname check is sketched after this list.
  • Implement endpoint monitoring on macOS systems handling financial or development work.
  • Restrict and audit browser extensions regularly.
  • Assume that attackers may use convincing multimedia deception as part of their strategy.
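
On the second point, even a very simple hostname check catches the most common look-alike tricks. The sketch below is illustrative only: the allowlist entries are examples, and a real deployment should list the meeting platforms your organization actually uses.

    from urllib.parse import urlparse

    # Minimal sketch: accept a meeting link only if its hostname is one of the
    # expected domains or a subdomain of one. Allowlist entries are examples.
    ALLOWED_SUFFIXES = ("zoom.us", "teams.microsoft.com", "meet.google.com")

    def looks_legitimate(url):
        host = (urlparse(url).hostname or "").lower()
        # Rejects look-alikes such as "zoom.us.example.com" or "zoorn.us".
        return any(host == s or host.endswith("." + s) for s in ALLOWED_SUFFIXES)

    if __name__ == "__main__":
        for link in (
            "https://us02web.zoom.us/j/1234567890",
            "https://zoom.us.meeting-login.example.com/j/1234567890",
        ):
            print("OK   " if looks_legitimate(link) else "CHECK", link)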

Most importantly, build a culture where pausing to verify is encouraged rather than penalized.

The Bigger Picture

This campaign demonstrates how modern cyber operations combine technical exploitation with psychological precision. The malware was advanced. The infrastructure was organized. But the entry point was human trust, amplified by a convincing video meeting.

As AI tools become more accessible, social engineering will continue to evolve beyond email scams and simple impersonation.

Security teams, especially within the cryptocurrency sector, should prepare for attacks that look and feel legitimate at every stage.

Research Reference

This article is based on publicly available threat intelligence research published by Mandiant regarding UNC1069’s activity targeting the cryptocurrency sector. For detailed technical analysis, indicators of compromise, and detection guidance, readers are encouraged to review the original research from Mandiant.

Disclaimer

This article is intended for informational and educational purposes only. It summarizes publicly available threat intelligence research and does not disclose sensitive investigative details or proprietary findings.

While the case discussed involves the cryptocurrency sector, the tactics described are not limited to crypto organizations. Social engineering through compromised accounts, spoofed meeting platforms, AI-enhanced impersonation, and multi-stage malware deployment are techniques that can be applied across any industry. Financial services, technology firms, healthcare organizations, government entities, and even small businesses may face similar methods.

The purpose of this article is to raise awareness of evolving threat actor behavior and encourage stronger security practices across all sectors. Readers are encouraged to consult original research sources and implement appropriate security controls relevant to their environment.

https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering