OpenAI Is Worried About Super Smart AI, Investing $10 Million to Keep It in Check


OpenAI is confronting the challenge of keeping highly capable AI systems in check and is seeking solutions. The company is exploring ways to make ‘super-smart’ AI behave as intended, notably by using a less advanced AI to supervise a more intelligent one as part of a broader AI risk management effort.
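The weak-supervising-strong setup described above can be sketched in miniature. The following is a hypothetical illustration (not OpenAI's actual code, and using scikit-learn stand-ins for real language models): a small "weak" model produces imperfect labels, and a larger "strong" model is trained on those labels instead of the ground truth, mimicking humans supervising a smarter-than-human system.

```python
# Minimal sketch of weak-to-strong supervision (hypothetical example):
# a weak model's noisy labels supervise a stronger model's training.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Weak supervisor": a simple model trained on only a little ground truth.
weak = LogisticRegression().fit(X_train[:100], y_train[:100])
weak_labels = weak.predict(X_train)  # imperfect labels for the student

# "Strong student": a bigger model trained purely on the weak labels,
# never seeing the true labels at all.
strong = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X_train, weak_labels)

# The question this line of research asks: can the strong model recover
# performance beyond that of its imperfect supervisor?
print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.2f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.2f}")
```

In the real setting the "weak" and "strong" models are language models of different scale, but the supervision pattern is the same: the stronger system only ever sees the weaker one's judgments.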

So that happened…

OpenAI is allocating $10 million to support research in this area, with the aim of developing tools to ‘control’ superhuman AI. The company holds the view that Artificial Intelligence (AI) surpassing human intelligence is not merely speculative but likely imminent. The research therefore focuses on the potential risks and benefits associated with advanced AI.

“AI progress recently has been extraordinarily rapid, and I can assure you that it’s not slowing down,” said Aschenbrenner, a member of the Superalignment team, who was in New Orleans at NeurIPS, the annual machine learning conference, to present OpenAI’s newest work on ensuring that AI systems behave as intended.

Everyone can see that this technological advance shows no sign of stopping. That pace alone concerns OpenAI and is driving its investment in safety research.

“I think we’re going to reach human-level systems pretty soon, but it won’t stop there — we’re going to go right through to superhuman systems … So how do we align superhuman AI systems and make them safe? It’s really a problem for all of humanity — perhaps the most important unsolved technical problem of our time,” Aschenbrenner added.

AI Risk Management

With that in mind, OpenAI is focusing on control tools that can identify the potential dangers of highly advanced ‘superhuman’ AI, taking a hands-on role in overseeing and mitigating those risks. To achieve this, it is employing several techniques:

  1. Alignment research: This research is dedicated to exploring ways for AI systems to align with human values and goals. The aim is to prevent these systems from pursuing objectives that conflict with our own.
  2. Safe AI Design: This involves creating AI systems with a safety-first approach, incorporating precautions to prevent them from causing harm or acting beyond their intended purposes.
  3. Control Frameworks for ‘Superintelligent’ AI: Ethical and regulatory guidelines to govern the development and use of extremely powerful AI systems.
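The “safety-first” design idea in the second technique can be illustrated with a toy pattern (our own hypothetical example, not an OpenAI tool): an AI system is only permitted to execute actions that pass an explicit allowlist check, so it cannot act beyond its intended purpose even if it proposes to.

```python
# Toy "safe by design" guardrail (hypothetical illustration): any action
# not explicitly permitted is refused, i.e. the system fails closed.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def execute(action: str, payload: str) -> str:
    """Run an action only if it is explicitly permitted."""
    if action not in ALLOWED_ACTIONS:
        # Fail closed: anything outside the intended scope is blocked.
        return f"REFUSED: '{action}' is outside the permitted scope"
    return f"OK: performed '{action}' on {len(payload)} chars of input"

print(execute("summarize", "some long article text"))
print(execute("delete_all_files", "/"))  # blocked by the guardrail
```

Real precautions for powerful systems are far more involved, but the design principle is the same: permissions are granted explicitly rather than revoked after the fact.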

For OpenAI to announce that it is developing tools to control ‘superhuman’ AI is a bold statement. It underscores the urgency of managing such powerful technology, and it acknowledges the risks that would come with misuse of the control mechanisms themselves.

The concept of ‘superhuman’ AI is vague and complex, posing significant challenges in effectively managing it and addressing the many unanswered questions it brings.


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.