ChatGPT’s AI Memory Spyware Threatening macOS Again
A newly patched vulnerability in OpenAI’s ChatGPT’s AI memory for macOS could have allowed attackers to implant spyware to continuously exfiltrate user interactions and future chat sessions.
The so-called “SpAIware” vulnerability, discovered by security researcher Johann Rehberger, exploited the ChatGPT new memory feature to continuously exfiltrate user data, including future chat conversations and responses.
Malicious Injections for Data Exfiltration
The ChatGPT macOS memory security vulnerability focuses on a long term memory AI, introduced by OpenAI several months ago that allows ChatGPT to remember certain information from one chat session to another.
The memory feature was part of an effort to make the tool easier for people instead of repeating information in future conversations. Users can also manually clear certain memories.
Rehberger discovered a way to easily exploit ChatGPT’s AI with long term memory through indirect prompt injection, allowing malicious actors to potentially inject harmful commands into the system’s memory.
This vulnerability could compromise all subsequent conversations with the model, as the injected commands would persist, affecting future interactions.
“The malicious instructions are stored in ChatGPT’s memory, causing all subsequent conversations to be sent to the attacker,” Rehberger said.
In this type of attack, the user could be misled into visiting a malicious website. The website injects commands into ChatGPT’s AI memory leading to transferring of all future chat data to a server controlled by the attacker.
In this type of attack, the user could be tricked into visiting a malicious website, which then injects commands into ChatGPT’s AI memory, resulting in all future chat data being transferred to a server controlled by the attackers, threatening the privacy and security of the user’s conversations.
Since the injected instructions would persist across chat sessions, this type of attack could potentially continue indefinitely, placing users’ data at threat of being continuously exfiltrated by what essentially becomes ChatGPT spyware risk.
Vulnerability Fixed, But…
After the disclosure of the AI memory vulnerability, OpenAI addressed the issue with ChatGPT version 1.2024.247, effectively shutting down the data exfiltration pathway.
Rehberger also emphasized that one important user-side action was the manual cleanup of memories stored by ChatGPT to further protect against potential risks.
“Users should review their stored memories for suspicious or incorrect entries and clean them up regularly,” Rehberger said.
This incident shows the risks involved in AI chatbot with long term memory particularly regarding data privacy and security.
The discovery of the AI memory vulnerability comes at a time when there is growing concern about the security of AI systems.
The discovery of the AI memory vulnerability comes at a time when increasing concerns over the security of AI systems are on a perpetual rise.
Academics have conducted another study highlighting a new AI jailbreaking method, code-named, MathPrompt, which exploits the symbolic mathematics capabilities of large language models.
Designed to bypass AI safety mechanisms by using complex mathematical prompts to “trick” the system, the technique will allow users to evade restrictions intended to prevent harmful or prohibited outputs.
According to the findings from this new research, AI models generated unsafe responses 73.6% of the time when exposed to mathematically encoded harmful prompts, compared to only 1% with unmodified prompts.
Microsoft has introduced a new Correction feature within its Azure AI Content Safety platform, which goes beyond simply detecting inaccuracies – often referred to as “hallucinations” – in real time.
The feature actively corrects these errors before they reach users, enhancing the reliability and accuracy of AI-generated content.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.