Sleepy Pickle Invades Machine Learning 

A security threat dubbed Sleepy Pickle exploits the Pickle format to corrupt machine learning (ML) models, posing major risks to supply chains.
Cybersecurity firm Trail of Bits explained that this new technique targets machine learning models directly rather than the systems they run on, making it a covert, novel type of attack.

According to security researcher Boyan Milanov, “Sleepy Pickle is a stealthy and novel attack technique that targets the machine learning model itself rather than the underlying system.”  

What Is the Pickle Format?

Pickle is a serialization format commonly used by machine learning libraries, such as PyTorch, to save model data. It has a security flaw that allows arbitrary code to run during deserialization, meaning a malicious pickle file can execute code the moment a model is loaded.
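The flaw comes from pickle's `__reduce__` protocol, which lets a serialized object specify a callable to be invoked at load time. A minimal, harmless sketch (the `sleepy_demo` sentinel is purely illustrative):

```python
import pickle
import sys

class Malicious:
    """An object whose pickled form runs arbitrary code when loaded."""
    def __reduce__(self):
        # pickle invokes the returned (callable, args) pair during
        # deserialization -- this is the flaw Sleepy Pickle builds on.
        # Here the "payload" only sets a harmless sentinel attribute.
        return (exec, ("import sys; sys.sleepy_demo = 'payload ran'",))

blob = pickle.dumps(Malicious())
pickle.loads(blob)                 # the payload executes during the load itself
print(getattr(sys, "sleepy_demo"))  # -> payload ran
```

A real payload would call something far more damaging than `exec` on a sentinel assignment; the point is that merely loading the file is enough to run it.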

In documentation related to this issue, Hugging Face emphasizes the importance of mitigating this threat: “We suggest loading models from users and organizations you trust, relying on signed commits, and/or loading models from [TensorFlow] or Jax formats with the from_tf=True auto-conversion mechanism.”
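Beyond Hugging Face's format-based advice, the Python pickle documentation itself suggests restricting which globals a pickle may reference by overriding `Unpickler.find_class`. A minimal sketch, assuming a hypothetical allow-list suitable for the data being loaded:

```python
import io
import pickle
from collections import OrderedDict

# Hypothetical allow-list: only globals the expected data legitimately needs.
SAFE_GLOBALS = {("collections", "OrderedDict")}

class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Reject any global not on the allow-list (e.g. builtins.exec),
        # which is how payload callables sneak into a pickle stream.
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(blob: bytes):
    return SafeUnpickler(io.BytesIO(blob)).load()

# A benign payload loads fine...
print(safe_loads(pickle.dumps(OrderedDict(a=1))))

# ...but a pickle referencing exec is refused at load time.
class Evil:
    def __reduce__(self):
        return (exec, ("print('pwned')",))

try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError as err:
    print("rejected:", err)
```

This is defense in depth, not a complete fix: an allow-listed global can still be abused, which is why switching to non-executable formats such as safetensors is the stronger recommendation.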

How Does Sleepy Pickle Work? 

Sleepy Pickle works by inserting a malicious payload into a pickle file using open-source tools such as Fickling. The file can then be delivered to a target through various methods, including adversary-in-the-middle (AitM) attacks, phishing, supply chain compromises, or the exploitation of system weaknesses.

An adversary-in-the-middle (AitM) attack is a type of cyberattack in which the attacker secretly intercepts, and potentially alters, the communication between two parties. This type of attack is dangerous because it can go unnoticed.

Milanov added, “When the file is deserialized on the victim’s system, the payload is executed and modifies the contained model in-place to insert backdoors, control outputs, or tamper with processed data before returning it to the user.” This means that the payload can change the model’s behavior by altering its weights, as well as the input and output data it processes.
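The in-place tampering Milanov describes can be illustrated with a toy sketch. Here a plain dict stands in for a model, and a hypothetical `backdoor` function plays the role of the injected payload that rewrites the weights during deserialization, so the "loaded model" the user receives is already compromised:

```python
import pickle

# Stand-in "model": a dict of weights (purely illustrative).
model = {"weights": [0.1, 0.2, 0.3]}

def backdoor(weights):
    """Hypothetical payload: runs at load time and flips a weight's sign."""
    weights[0] = -weights[0]
    return weights

class TamperedModel:
    def __init__(self, model):
        self.model = model
    def __reduce__(self):
        # At load time, backdoor() is invoked on the serialized weights and
        # its return value becomes the object handed back to the user.
        return (backdoor, (self.model["weights"],))

blob = pickle.dumps(TamperedModel(model))   # the "poisoned" pickle file
loaded = pickle.loads(blob)                 # victim loads the model
print(loaded)  # -> [-0.1, 0.2, 0.3]
```

A real Sleepy Pickle payload is injected into an existing model file (e.g. with Fickling) and would patch actual tensors or wrap the model's input/output handling, but the mechanism, code riding along inside the pickle stream and mutating the model as it loads, is the same.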

Potential Outcomes  

In a potential attack scenario, this technique could be used to generate harmful outputs or misinformation, with severe consequences for user safety, such as recommending dangerous actions like drinking bleach to treat the flu. Under certain conditions it could also steal user data, attacking users indirectly, for example by producing manipulated summaries of news articles that contain links to phishing pages.

Trail of Bits highlighted that Sleepy Pickle can be used by malicious actors to maintain covert access to machine learning systems while evading detection, since the model is corrupted the moment the pickle file is loaded in Python.

This method is also said to be more effective than directly uploading a malicious model to platforms such as Hugging Face, given that it can dynamically change a model’s behavior without requiring targets to download and run the malicious models themselves.


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Cybersecurity sections to stay informed and up-to-date with our daily articles.