AI Misalignment: Who’s Going to Save Us Now?

AI Misalignment, AI, goals, legislator, threatening,

As the potential harms of AI misalignment loom, the responsibility falls on humanity to navigate the risks and ensure its safe and beneficial integration.

  • AI systems can exhibit unintended consequences or harmful outcomes when their objectives do not align with the intended goals or values of their human creators.
  • Legislators should consider expert recommendations when establishing frameworks that govern the responsible use of AI to minimize risks and protect society.

I was doing research for this article, and I stumbled upon a Future of Life Institute article dating back to 2015. The article, although outdated to a certain extent, points out that we cannot, in good conscience, have a meaningful discourse about AI without considering its adverse effects. And I agree. It’s all fun and games until a video of a presidential candidate shooting someone dead goes viral and no one can verify it. Until a voice recording of the United Nations High Commissioner for Refugees verbally abusing the very people that he has pledged to help makes its rounds through the media. AI will cause harm but only if we, the only sentient ones in this conversion, allow it.

AI Misalignment: Instructions Unclear

When we create, we have a goal in mind. Each drug, for example, has a specific intended use. Insurance won’t even cover a drug prescribed for non- “intended uses” purposes. And AI is no different. Every AI-based system has its goal. Dall-E, for example, generates art. ChatGPT, on the other hand, is conversation. So, what happens when goals get lost in practice?

Misaligned goals in AI refer to when an AI system’s objectives don’t align with the intended goals or values of its human creators or society at large. As a result, we’d have to deal with unintended consequences or harmful outcomes. Let’s say a building has an AI tool specifically designed to optimize energy efficiency. If the reward signal is solely based on minimizing energy consumption, the AI may learn to achieve this goal by simply turning off vital systems, such as heating or cooling, regardless of the comfort or safety of the occupants. A more real example of this is Microsoft’s Tay. Back in 2016, the developers released a Twitter chatbot with the end goal of engaging people. Take a wild guess what ended up happening!

The chatbot, in its eternal artificial wisdom, concluded that the best way to maximize engagement is by spewing racist insults! It reached the end goal, engagement, but at what cost?

But It Will End Humanity if Given a Chance!

If you have read Mary Shelley’s Frankenstein, you remember the villager persecuting the monster despite knowing that it was not its fault that it lacked certain societal “finesse”. They “crucified” the creation for the creator’s mistakes. Was that fair?

Now, tell me, is it fair for us to villainize AI when it, just like Frankenstein’s monster, was created for a certain goal with virtually no further instructions?

For us to be able to mitigate the risks, legislators need to hear experts from various fields, such as computer science, ethics, sociology, and policy, and take their recommendations into consideration as they are building frameworks for the use and abuse of AI.

Final Thoughts

I hate to be the bearer of bad news, but in this tale, there are no knights in shining armor coming to save us from AI. If we don’t anticipate and restrict malicious actors from taking advantage of such a powerful tool, we’ll have to scramble to save ourselves. And if our behavior during the pandemic is any indication, it’s going to be every person for themselves.


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Ethical Tech section to stay informed and up-to-date with our daily articles.