Robust Intelligence Exposes 'Jailbreak' Vulnerabilities in AI Models 

AI model security AI Model Security Exposing 'Jailbreak' prompts

Robust Intelligence, a startup dedicated to developing AI model security to protect them from attacks has developed a system that explores Large Language Models (LLMs), discovering ‘jailbreak’ prompts that cause them to misbehave. 

This is how it goes … Jail breaking 

First, let’s delve into the ‘jailbreak’ concept first. It’s a method to make the AI model ‘misbehave’ by inputting specific prompts leaking potential weaknesses. 

The systematic approach is known as ‘adversarial.’ It employs a second AI system to generate prompts that trick the LLMs into bypassing their safety measures, potentially leaking confidential information.  

“This does say that there’s a systematic safety issue, that it’s just not being addressed and not being looked at,” says Yaron Singer, CEO of Robust Intelligence and a professor of computer science at Harvard University. “What we’ve discovered here is a systematic approach to attacking any large language model.” 

This vulnerability is particularly concerning as it allows bad actors, especially malicious ones, to access sensitive information and/or use Large Language Models (LLMs) to create harmful content. 

Well-crafted Prompts 

The adversarial prompts are extremely well-crafted messages that exploit the weaknesses in the LLM’s training data. Researchers were able to get GPT-4 to reveal data that it is not supposed to disclose. 

There is an urgent need to focus on enhancing security measures for Large Language Models and developing techniques to detect and prevent adversarial attacks. It’s also crucial to be vigilant about how we use Large Language Models and the information we share with them. 

OpenAI spokesperson Niko Felix says the company is ‘grateful’ to the researchers for sharing their findings. ‘We are constantly working to make our models safer and more robust against adversarial attacks, while also maintaining their usefulness and performance,’ says Felix. 

Breaking out of the ‘jail’ and setting Large Language Models free is fundamental to protecting our data from malicious attacks by bad actors. 


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Tech sections to stay informed and up-to-date with our daily articles.