Workers’ Open Letter Stresses AI Safety


An open letter from current and former OpenAI and Google DeepMind employees raises fresh concerns about artificial intelligence safety.

  • The letter alleges that these companies put profit ahead of safe and secure AI development and deployment.
  • The group calls for stronger whistleblower protections so companies cannot silence their concerns through confidentiality agreements.

OpenAI and Google DeepMind employees signed an open letter, warning about artificial intelligence (AI) safety and demanding better whistleblower protection.

The group is made up of 11 current and former OpenAI workers and 2 current and former Google DeepMind employees. They open the letter by affirming their belief that AI technology can greatly benefit humanity before getting straight to the point: artificial intelligence safety.

The open letter reads, “We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

The Ugly Truth

The point of this letter is not to reiterate what AI companies and governments have already acknowledged. The group—some of whom remained anonymous—took this chance to tell the public just how seriously AI companies are taking artificial intelligence safety.

Allegedly, these companies have “substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.”

However, these enterprises have a very attractive reason to keep that information under wraps and "avoid effective oversight": money. Meanwhile, they have only a "weak obligation" to disclose it to governments and none to the public. The group also strongly believes that they will not share it voluntarily.

They end the letter by asking that these businesses stop silencing their concerns with confidentiality agreements.

Straight from the Horse’s Mouth

We always suspected that we, the public, were not getting the full picture. And we saw how governments fumbled around blindfolded, trying to ensure artificial intelligence safety without full information.

But suspecting something and knowing something are two different things, and this group knows what is happening behind the scenes. So, having them confirm that these companies are prioritizing their bottom line over the safety of our civilization is worrisome.

We should have seen this coming, as OpenAI has been commercializing its AI technology through an API that serves as a revenue source. Even Elon Musk was not happy about this, suing the company and its CEO, Sam Altman, for abandoning the original mission: developing AI for the benefit of humanity, not for profit. OpenAI and Google DeepMind's roles are to develop safe and secure AI, not speedrun our extinction.

The dread of what we do not know about artificial intelligence safety now looms over us like a cloud on a dark February night.

So, will the legislators heed the warning?
