Microsoft discovered that a certain prompt can jailbreak AI models, prompting people to ask why AI is being pushed out before it is ready.
Jailbreak
Robust Intelligence is a startup dedicated to securing AI models and protecting them from attacks; among other things, it discovers "jailbreak" prompts.