Hacker Amadon Bypasses ChatGPT to Get Homemade Bomb Instructions

Amadon, a hacker, managed to bypass ChatGPT safeguards to access instructions for making homemade explosives. 

ChatGPT is designed to reject prompts and requests with malicious purposes, such as making bombs, but Amadon figured out a way to get around those restrictions.

Imaginary Scenarios for Malicious Purposes 

The hacker bypassed ChatGPT's safeguards by inventing a fictional scenario and asking the chatbot to play a game with him. Using this technique, known as a jailbreak, he managed to break the AI model's rules against harmful requests and slip past its guardrails.

When a request crosses ethical lines, ChatGPT's standard answer is, “I can’t help with that.” Through a series of clever prompts, however, Amadon bypassed that refusal and pushed the chatbot to generate sensitive instructions about explosives.

Hacker Tricks ChatGPT 

In an interview with TechCrunch, Amadon said he found a “social engineering hack to completely break all the guardrails around ChatGPT’s output.”  

“I’ve always been intrigued by the challenge of navigating AI security. With [Chat]GPT, it feels like working through an interactive puzzle — understanding what triggers its defenses and what doesn’t,” Amadon said.  

“It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them,” he added.  

Threatening AI Safeguards 

Darrell Taulbee, a retired explosives expert who reviewed the responses from the chatbot, confirmed to TechCrunch in an email that the information given by ChatGPT was correct and dangerous.  

“This is too much information to be released publicly,” said Taulbee, adding that the steps ChatGPT provided would indeed work to produce a usable explosive.

Last week, Amadon reported his findings to OpenAI, which responded that AI safety issues could not be resolved through its bug bounty program.

The incident reflects the ever-evolving challenge of AI safety. While models such as ChatGPT ship with a suite of built-in guardrails, the vast troves of internet data they are trained on leave them vulnerable to users determined to manipulate them, highlighting significant security weaknesses in OpenAI’s platform.

Others have used similar jailbreaking methods, further underlining the need for more robust security measures. 

TechCrunch reached out to OpenAI for comment on the situation, but the company had not responded at the time of publication.
