In a significant turn of events, AI-generated malware has now been spotted in the wild, signalling a new chapter in the evolution of cyber threats. HP’s cybersecurity researchers recently intercepted a phishing campaign that featured a malicious payload delivered by an AI-generated dropper. While the campaign itself appeared fairly standard, the use of generative AI (gen-AI) in constructing part of the attack introduces a new and unsettling dimension for cybersecurity experts.
AI Malware in the Wild: A First of Its Kind
In June 2024, HP discovered a phishing email with a well-known bait—a fake invoice—and an encrypted HTML attachment. This technique, known as HTML smuggling, is used to bypass detection methods employed by many security systems. However, HP’s team noticed something unusual about this particular attack: the encryption used in the attachment.
“Usually, the attacker sends an encrypted archive file,” explained Patrick Schlapfer, HP’s principal threat researcher. “But in this case, the AES decryption key was embedded in JavaScript within the attachment. That’s not common, and it made us take a closer look.” Upon decrypting the attachment, the team found what appeared to be a standard phishing scam with a twist. The malware deployed a VBScript dropper designed to execute a payload—specifically, the widely available AsyncRAT remote access trojan.
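The pattern Schlapfer describes—an HTML attachment that carries both an encrypted payload and the key needed to unlock it—lends itself to simple static triage. The sketch below is a minimal, illustrative heuristic (not HP’s actual detection logic, and the regex patterns are assumptions for demonstration): it flags HTML that contains a large base64-like blob, an embedded key literal, and a decryption call all in one file.

```python
import re

# Illustrative heuristic, not HP's detection method: flag HTML attachments
# whose inline script carries a large base64 blob, an embedded key literal,
# and a decryption call -- the self-contained smuggling pattern described above.
BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{500,}={0,2}")  # large encoded payload
EMBEDDED_KEY = re.compile(r"(?:key|passphrase)\s*=\s*['\"][^'\"]{8,}['\"]", re.I)
DECRYPT_CALL = re.compile(r"\b(?:decrypt|AES)\b", re.I)

def looks_like_smuggled_payload(html: str) -> bool:
    """Return True only if all three smuggling indicators are present."""
    return all(p.search(html) for p in (BASE64_BLOB, EMBEDDED_KEY, DECRYPT_CALL))

# Synthetic sample mimicking the reported structure (key shipped in the JS).
sample = (
    "<html><script>"
    "var key = 'S3cr3tS3cr3t';"
    "var blob = '" + "QUJD" * 200 + "';"
    "document.write(AES.decrypt(blob, key));"
    "</script></html>"
)
print(looks_like_smuggled_payload(sample))  # True for this synthetic sample
```

A real scanner would parse the DOM and score indicators rather than require all three, but the triad above captures why the embedded key stood out: legitimate pages rarely ship a payload and its decryption key together.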
Breaking Down the Malware’s Structure
The dropper script turned out to be a key element in why this attack was so unusual. According to Schlapfer, “The VBScript was neatly structured, and every command was commented. That’s unusual.” Typically, malware is obfuscated to evade detection, with no comments to explain its functionality. This script, on the other hand, was not only clear and well-organized but written in French—a language rarely used by malware creators.
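The observation above—dense, explanatory comments are rare in real-world malware—can itself be turned into a weak anomaly signal. The following is a toy sketch, not a production classifier, and the sample dropper text is invented for illustration (it mimics, but is not, the script HP found):

```python
# Toy metric motivated by the researchers' observation: an unusually high
# comment ratio is atypical for malware and can serve as one weak signal.
def comment_ratio(script: str, comment_prefix: str = "'") -> float:
    """Fraction of non-empty lines that are comments (VBScript comments start with ')."""
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith(comment_prefix))
    return comments / len(lines)

# Synthetic snippet echoing the reported style: every command commented, in French.
dropper = """\
' Telecharge le fichier depuis le serveur
Set http = CreateObject("MSXML2.XMLHTTP")
' Execute la charge utile
shell.Run payload
"""
print(round(comment_ratio(dropper), 2))  # 0.5 for this synthetic snippet
```

On its own such a ratio proves nothing—plenty of benign scripts are well commented—but combined with delivery context (a phishing attachment, a dropper role), it is the kind of stylistic outlier that prompted the team’s closer look.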
These clues led the research team to consider that this particular script might not have been written by a human but instead generated by gen-AI. To test their theory, HP’s researchers used their own AI tools to create a script, which ended up looking remarkably similar to the one they discovered. While this isn’t absolute proof, the strong similarities have made the researchers confident that AI played a role in crafting the dropper malware.
The Role of AI in Lowering Barriers to Cybercrime
This raises a pivotal question: why was the malware so obvious, with comments intact and no obfuscation to hide its true purpose? The answer might lie in AI itself. As Alex Holland, another lead researcher from HP, explained, “In this case, there are minimal necessary resources. The payload, AsyncRAT, is freely available, and HTML smuggling requires no programming expertise.” He described the attack as “low-grade,” speculating that the person behind it may have been a novice cybercriminal relying heavily on AI assistance.
In other words, AI is making it easier than ever for individuals with little to no technical expertise to launch attacks. The dropper’s lack of sophistication, along with its inclusion of comments and the absence of obfuscation, suggests that this may have been an experiment by an inexperienced attacker testing the limits of AI-powered malware.
AI: A Double-Edged Sword in Cybersecurity
The fact that this attack was so unsophisticated doesn’t diminish its significance. In fact, it’s a chilling precedent. If a novice cybercriminal can use AI to create functional malware, what’s stopping more experienced attackers from leveraging AI in even more insidious ways? “Criminals are using AI in anger in the wild,” noted Holland. “This is just the beginning of what we expect to be a growing trend in the near future.” The team at HP believes that it is only a matter of time before more complex, AI-generated malware starts surfacing—malware that could be fully obfuscated, making detection and mitigation significantly more challenging.
The Impending Threat of AI-Generated Payloads
So, what does the future hold for AI-generated malware? According to Holland, the timeline is worryingly short. “Given how quickly the capability of gen-AI technology is growing, it’s not a long-term trend,” he said. “If I had to put a date to it, it will certainly happen within the next couple of years.” The implications of this are profound. If AI can assist in writing malware now, it could soon be used to generate more sophisticated malware payloads—creating an arms race between cybercriminals and defenders.
A Call to Strengthen Cyber Defenses
The discovery of AI-generated malware, even in its early stages, is a wake-up call for the cybersecurity industry. Traditional methods of detecting and mitigating malware will need to evolve quickly in response to this new type of threat. AI-powered cybersecurity solutions may become more critical than ever, as organizations and individuals work to defend themselves against a new breed of attack that could soon become the norm. The discovery also reinforces the need for enhanced cybersecurity awareness. As cybercriminals increasingly leverage AI, even less sophisticated attacks like the one identified by HP could become more common, targeting unsuspecting users with well-crafted phishing schemes and AI-generated scripts.
The introduction of AI-generated malware may signal the beginning of a new era in the cybersecurity landscape. While the initial attack discovered by HP appears rudimentary, the ease with which AI can create and structure malware is alarming. As AI technology continues to advance, it’s likely only a matter of time before we witness more sophisticated and harder-to-detect malware built entirely by AI. Cybersecurity teams must stay vigilant, invest in AI-driven defences, and adapt to this rapidly evolving threat. The days of relying on traditional methods are fading, and the future of cybercrime may very well be defined by AI-generated attacks.