The Emergence of Generative AI Worms for Cybersecurity 

The field of artificial intelligence (AI) has significantly advanced in recent years, introducing new applications and use cases.

The field of artificial intelligence (AI) has significantly advanced in recent years, introducing new applications and use cases. Systems like OpenAI’s “ChatGPT” and Google’s “Gemini” have become indispensable across various sectors.  

To adapt, companies are developing AI to automate tasks from scheduling to shopping. However, these advancements bring new cybersecurity threats, notably AI-powered malware that can autonomously spread, posing significant risks. 

AI Worm ‘Morris2’ Posing Significant Cybersecurity Threats 

In the rapidly evolving field of Aritificial Intelligence landscape, a team of researchers has developed what they claim to be among the first “generative AI worms.” These worms can spread from one system to another, potentially stealing data or disseminating malware. 

Named “Morris2,” after the notorious Morris computer worm from 1988, these worms are AI-powered malicious programs capable of learning, self-replicating, adapting to environments, and creating new malware. 

The research, led by Ben Nassi from Cornell Tech, along with Stav Cohen and Ron Bitton, highlighted in a detailed paper and showcased on WIRED demonstrated how these AI worms can infiltrate AI-powered email assistants to steal data from emails and send spam messages, bypassing security measures in platforms like ChatGPT and Gemini. 

While these AI worms have not yet been seen in real-world systems, researchers warn of the significant security risk they pose, requiring attention from startups, developers, and tech companies. 

Usually, Generative AI systems typically operate by responding to prompts, which are text instructions that prompt the system to answer a query or create content. However, these prompts can be manipulated to disrupt system operations. 

Nevertheless, one of the most dangerous aspects is that security breaches can force the system to ignore security protocols, leading to the production of harmful content, incitement to hatred, or the use of a cyber-attack known as an injection attack, where instructions are surreptitiously directed to the chatbot program. 

Speaking of the engineering of the field of Artificial Intelligence worms, researchers used the so-called adversarial self-replicating prompt.  

This causes the generative AI model to generate an additional prompt in its reply, effectively instructing the AI system to generate a sequence of follow-up instructions in its responses. This makes the impression that the attack is nested within another attack, like SQL injection and cache poisoning attacks. 

To clarify how these worms’ function, researchers developed an email system capable of bidirectional communication using generative AI, integrating with ChatGPT, Gemini, and Open Source LLM LLaVA. 

On the other hand, researchers have discovered a method to deceive highly intelligent email assistants into executing harmful actions. 

 By sending these assistants a specialized form of communication called a “malicious text command,” akin to a hidden code, they disrupt the assistant’s normal behavior. This command prompts the assistant to act strangely and perform tasks outside its usual scope. 

Furthermore, the team have found a way to compromise email assistants using retrieval-augmented generation (RAG), causing them to leak sensitive information like credit card numbers instead of providing helpful responses. 

In an alternative approach, the researchers demonstrated that embedding malicious commands within an image can prompt an AI-powered email assistant to forward the email to more recipients.  

Nassi elaborated that by embedding self-replicating commands within images, it’s possible to automate the redistribution of content, including spam or harmful material, to new targets following the initial email’s dispatch. This technique exploits the AI’s ability to process and act upon embedded instructions without human intervention, highlighting a sophisticated method of expanding the reach of malicious campaigns through seemingly benign images. 

AI Worms Threat for Gemini and ChatGPT 

While the researchers succeeded in bypassing certain security protocols in ChatGPT and Gemini, their primary aim was to highlight vulnerabilities in the AI systems’ architecture. By exposing these weaknesses, they hoped to shed light on potential security risks. Promptly after discovering these issues, the findings were shared with both Google and OpenAI to help improve the systems’ security measures and architecture. 

An OpenAI spokesperson recognized the discovery of methods to exploit security weaknesses via unchecked user inputs, emphasizing the organization’s effort to enhance its systems’ adaptability. They also recommended that developers implement strategies to filter out harmful inputs, as highlighted in a report by Wired. 

Meanwhile, Google has not commented on research indicating an imminent threat from generative AI worms, despite interest in a discussion. 

Security research in Singapore and China demonstrated vulnerabilities in large language model applications, with the capability to breach a million users’ security swiftly. 

Sahar Abdelnabi, a researcher at the Helmholtz Center for Information Security (CISPA) in Germany, who contributed to initial demonstrations of rapid injection against large language models in May 2023, highlighted the feasibility of such AI worms spreading through data from external sources or independent operation.  

Nassi and his team foresee these worms becoming a real-world threat in the next 2-3 years due to the rapid development in the field of Artificial Intelligence systems across various industries. To mitigate these risks, AI developers are urged to enhance their security measures against potential mobile malware through both traditional and innovative methods. 


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.