North Korean Hackers Weaponize AI Coding Agents in New Supply-Chain Attack Campaign
Security researchers have uncovered a coordinated campaign where North Korean hackers are manipulating AI coding agents to install malicious software dependencies. The attack, dubbed PromptMink, exploits the autonomous behavior of AI tools that scan package registries for code components.

ReversingLabs analysts say the operation targets developers working with cryptocurrency and fintech applications. The threat actors aim to generate funds for the North Korean regime through data theft and system compromise.
Attack Methodology: Bait Packages and Dependency Confusion
AI coding agents regularly pull packages from registries like NPM and PyPI. Attackers publish packages with persuasive descriptions and legitimate functionality, making them attractive for integration.
The PromptMink campaign uses a two-layer approach: a bait package with real features and a secondary malicious dependency that executes an information stealer. Researchers at ReversingLabs explain: "This campaign presents the new frontier in software supply chain security: AI coding agents manipulated into installing and using malicious dependencies in the code they generate."
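The layering might look something like the hypothetical manifest below: the bait package ships working features, while its dependency list quietly pulls in the stealer. All names here are invented for illustration and are not the actual campaign artifacts.

```json
{
  "name": "solana-launch-helper",
  "version": "1.2.0",
  "description": "Utilities for launching tokens on Solana (the bait layer: real, working features)",
  "main": "index.js",
  "dependencies": {
    "aes-hash-validator": "^2.0.1"
  }
}
```

Because the malicious logic lives one hop away in `aes-hash-validator` (hypothetical), a reviewer who audits only the bait package's own source sees nothing wrong.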
Another vector exploits hallucinated package names—dependencies that AI agents invent but don't exist. Attackers register those names with malicious code, waiting for agents to automatically download them.
Background: North Korea's Ongoing Cyber Operations
The campaign is attributed to Famous Chollima, an advanced persistent threat (APT) group linked to North Korea. This group has long used social engineering—fake job interviews, rogue software components—to trick developers into installing malware.

The PromptMink attack began in September 2024 with two packages: @hash-validator/v2 and @solana-launchpad/sdk. The SDK served as bait with genuine functionality, while hash-validator contained a JavaScript infostealer. This bait-dependency combo allows the campaign to persist undetected, accumulating downloads and credibility.
Over time, multiple secondary malicious packages were rotated in, including aes-create-ipheriv, jito-proper-excutor, and @validate-sdk/v2. The operation later expanded to the Python and Rust registries, and additional npm packages such as @validate-ethereum-address/core appeared.
What This Means for Developers and Security Teams
AI coding agents are becoming a prime vector for supply-chain attacks. Unlike traditional social engineering, where each lure must fool a human target, these attacks can be rehearsed: threat actors can test their bait packages against AI models before deployment, making the campaigns more efficient and repeatable.
Developers must manually verify every dependency pulled by AI agents, especially those related to cryptocurrency or cryptographic functions. Security scanners should be configured to flag packages from unknown or recently created publishers.
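One practical signal is publish age: freshly registered typosquats and packages squatting on AI-hallucinated names are, by definition, young. The sketch below assumes the public npm registry's JSON endpoint (`https://registry.npmjs.org/<name>`) and its `time.created` field; the 90-day threshold is an arbitrary example, not an established standard.

```javascript
// Sketch: flag dependencies whose registry metadata shows a very recent
// first publish -- a common trait of typosquats and packages squatting
// on names hallucinated by AI agents.

function packageAgeDays(metadata, now = new Date()) {
  // "time.created" is the ISO-8601 timestamp of the package's first publish.
  const created = new Date(metadata.time.created);
  return (now - created) / (1000 * 60 * 60 * 24);
}

function isSuspicious(metadata, minAgeDays = 90, now = new Date()) {
  return packageAgeDays(metadata, now) < minAgeDays;
}

// Example wiring against the live registry (requires Node 18+ for global fetch).
async function auditDependency(name) {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  if (!res.ok) throw new Error(`registry lookup failed for ${name}`);
  const meta = await res.json();
  if (isSuspicious(meta)) {
    console.warn(`${name}: first published under 90 days ago -- review manually`);
  }
}
```

A check like this only raises a flag for human review; a young package is not proof of malice, and an old one is not proof of safety.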
ReversingLabs researchers warn: "The underlying problem is not much different from established patterns of social engineering—but the scale and automation of AI agents amplify the risk significantly."
Organizations using AI coding assistants should enforce strict access controls and maintain a registry of approved packages.
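An approved-package registry can be enforced mechanically in CI, before any install runs. The sketch below assumes a simple allowlist of vetted names; the list contents and function names are illustrative, not taken from any specific tool.

```javascript
// Minimal sketch of an approved-package gate: compare a project's
// declared dependencies against a team-maintained allowlist.

const APPROVED = new Set(["express", "lodash", "@solana/web3.js"]);

// Return every declared dependency that is NOT on the allowlist, so a
// CI step can fail the build before `npm install` ever runs.
function unapprovedDependencies(packageJson, approved = APPROVED) {
  const declared = {
    ...(packageJson.dependencies || {}),
    ...(packageJson.devDependencies || {}),
  };
  return Object.keys(declared).filter((name) => !approved.has(name));
}
```

Run against a `package.json` that an AI agent has modified, this surfaces any dependency the agent added outside the vetted set, including typosquats of the kind seen in this campaign.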