On Wednesday, Google announced that it had found new AI-powered malware capable of rewriting its own code using models such as Gemini, with experts calling it a turning point in cyberwarfare.
The discovery, made by Google’s Threat Intelligence Group, shows that malware is no longer written by humans alone; it is now co-designed with AI. This new generation of LLM-assisted malware marks a massive leap in how digital threats evolve.
Instead of depending on static programming, attackers now let AI help refine malicious code in real time. The result is self-improving malware that can change form, hide from scanners, and continuously adapt its attack strategy.
AI Changes Code in Real Time
One of the newly discovered strains, called Promptflux, is particularly sophisticated: Google said it taps directly into the company’s Gemini model to rewrite itself and stay invisible to traditional defenses.
“The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” the tech giant said.
Through Google’s API, Promptflux issues commands such as: “Provide a single, small, self-contained VBScript function or code block to escape antivirus detection.” Armed with these AI-honed responses, the malware regenerates its code on an hourly basis, becoming an ever-changing version of itself.
As noted in Google’s report, this form of adaptive AI malware challenges existing security systems at their very core.
What once required a team of human coders can now be done with generative AI, driven by automated prompts. Still, not everyone is convinced that the threat is fully developed.
Marcus Hutchins, the researcher known for stopping the WannaCry ransomware attack, called the malware design “impractical.” He noted that Promptflux’s prompts were vague and unlikely to produce effective evasion tactics.
Even so, Google quickly revoked the malware’s API access and further hardened Gemini’s internal defenses against abuse by malware. The company said Promptflux appears to be an early experiment by financially motivated attackers, not state-sponsored groups.
The Rise of AI Powered Malware
Promptsteal, another newly discovered strain, takes this further by hooking into Alibaba’s Qwen model. Posing as an image-generation tool, it surreptitiously generates and executes Windows commands to steal data.
This is one of the first live cases of malware that enlists AI models as part of the attack process itself. Google attributed Promptsteal to the Russia-linked hacking group APT28, also known as Fancy Bear.
The company described it as the first observed case of malware querying an LLM during real operations, and considered it a sign of what future offensive AI malware may look like: code that rewrites, rethinks, and reattacks without direct human input.
Experts argue that this evolution calls for a completely new approach to defending against AI-driven cyberattacks. Traditional antivirus tools were built to detect known code signatures, but those defenses fall short against software that is constantly rewriting itself.
The next frontier lies in behavior-based AI threat detection.
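To see why behavior-based detection matters here, consider a minimal, purely illustrative Python sketch (all names and action labels are hypothetical, not from Google’s report): a signature scanner compares a sample’s hash against known-bad hashes, so a strain that rewrites its own code slips past it, while a behavior monitor flags the process by what it does rather than what its bytes look like.

```python
import hashlib

# Hypothetical signature database: hashes of previously seen malware samples.
KNOWN_SIGNATURES = {hashlib.sha256(b"malware v1").hexdigest()}

# Hypothetical suspicious behaviors: sequences of actions, not code bytes.
SUSPICIOUS_ACTIONS = {"query_llm_api", "rewrite_own_code", "disable_antivirus"}

def signature_scan(code: bytes) -> bool:
    """Flag a sample only if its hash matches a known signature."""
    return hashlib.sha256(code).hexdigest() in KNOWN_SIGNATURES

def behavior_scan(observed_actions: list[str]) -> bool:
    """Flag a process that performs two or more suspicious actions,
    regardless of what its code currently looks like."""
    return len(SUSPICIOUS_ACTIONS.intersection(observed_actions)) >= 2

# A self-rewriting sample: new bytes every generation, same behavior.
generation_1 = b"malware v1"
generation_2 = b"malware v1 (rewritten by an LLM)"
actions = ["query_llm_api", "rewrite_own_code", "write_file"]

print(signature_scan(generation_1))  # True  - matches a known hash
print(signature_scan(generation_2))  # False - hash changed, scanner misses it
print(behavior_scan(actions))        # True  - behavior still gives it away
```

The toy example only illustrates the principle: once the code mutates, the signature check silently fails, while the behavioral check still fires because the malware’s actions are unchanged.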
Combined with more advanced machine learning that can anticipate how these programs mutate, the future of cybersecurity will depend on systems that can think as fast as they defend. According to Google, the discovery underlines how rapidly AI-powered malware is becoming a defining feature of this landscape.
What had long been an arms race between humans is quickly shifting into an algorithmic one. As AI systems continue to grow in power, the line separating tool from threat continues to blur, ushering in an era where hackers and machines together create the next wave of digital attacks.