Hacker Breached OpenAI’s Internal Messaging Systems

internal messaging systems, openai, cybersecurity, breach

It has recently come to light that, last year, OpenAI experienced a serious security breach when a hacker broke into its internal messaging systems.

Many employees have expressed their worries regarding the potential for foreign entities, especially China, to steal AI technology, posing future national security risks.

What They Never Disclosed

In early 2023, OpenAI’s internal messaging systems were compromised. The hacker did not penetrate the core systems where the company develops and stores its AI models, but did manage to extract details about their design from employee discussions.

In April 2023, OpenAI executives revealed the breach to staff during an all-hands meeting. They chose not to disclose it publicly because they believed it posed no risk to customer data or national security, dismissing the hacker as a private individual with no ties to foreign governments.

As a result, authorities were not notified that OpenAI’s internal messaging systems were compromised.

Not Everyone Agreed

The executives’ dismissal of the threat did not assuage employees’ worries about foreign theft of AI technology. One of these employees, Leopold Aschenbrenner, then a technical program manager at OpenAI, voiced his concerns and criticized the company’s security measures in a memo to the board. The company later terminated him, accusing him of leaking information.

OpenAI spokesperson Liz Bourgeois disputed Aschenbrenner’s claim that his termination was politically motivated. “While we share his commitment to building safe [artificial general intelligence], we disagree with many of the claims he has since made about our work,” she said. “This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

Reasons for Concern

The breach of OpenAI’s internal messaging systems comes at a time when regulators are heavily questioning AI security. China is rapidly advancing in AI, producing nearly half of the world’s top AI researchers. This competition underscores the need for robust security measures and international collaboration to navigate the evolving AI landscape.

In response to security concerns, OpenAI has established a Safety and Security Committee, including former NSA Director Paul Nakasone, to address future AI risks. Federal and state regulators are also considering laws to restrict certain AI technologies and impose penalties for harm caused by AI.

Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Cybersecurity sections to stay informed and up-to-date with our daily articles.