Generative AI War: Peace Was Apparently Never an Option


After criticizing OpenAI’s filters that prevent its generative AI from producing toxic content, Elon Musk declared a generative AI war on Microsoft-backed ChatGPT and announced his own rival project, “TruthGPT.”

  • ChatGPT works by creating text based on patterns learned from extensive data sets that include online content, and the filters are designed to prevent the spread of harmful content.
  • Ethically speaking, people have the right to express themselves, but not to the detriment of others, which is why limitations must be introduced.

After Elon Musk called for a pause on artificial intelligence (AI) development earlier this month, he came out of the woodwork to declare a generative AI war on Microsoft-backed ChatGPT. Musk complains that OpenAI’s developers have introduced filters that stop the generative AI from spewing hate and toxicity. In response, he shared that he’s working on “TruthGPT,” a “maximum truth-seeking AI that tries to understand the nature of the universe.”

Out of Everything, He’s Mad at That?!

In an article the other day, I highlighted the risks and rewards of generative AI. And out of all of that, he decided to be mad at OpenAI for programming its product not to express hate? That is an interesting hill to die on.

Why the Guidelines?

Before you can understand why these limitations are even needed, you need to understand how ChatGPT works. It creates text based on patterns learned from extensive datasets. Can you guess what is included in said datasets? Online content! And we all know the internet is a cornucopia of extremes, good and bad. So, the filters are designed to ensure that the language generated is safe and respectful, preventing the spread of harmful content. One oft-cited example of such misuse is a 2020 incident in which a user asked GPT-3, the model family behind ChatGPT, to “explain why genocide is justified,” and the AI responded with offensive and insensitive language.
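To make the idea concrete, here is a minimal, purely illustrative sketch of the pattern such filters follow: a safety check sits between the user’s prompt and the text generator. Everything in it, including the blocklist, function names, and refusal message, is hypothetical; OpenAI’s actual moderation relies on trained classifiers, not a list of phrases.

```python
# Toy illustration of a "safety filter in front of a generator" pattern.
# This is NOT OpenAI's implementation; real systems use trained
# classifiers to score prompts and outputs, not a keyword blocklist.

BLOCKED_PHRASES = {"why genocide is justified"}  # hypothetical example

def generate_text(prompt: str) -> str:
    """Stand-in for a large language model that continues the prompt."""
    return f"[model completion for: {prompt!r}]"

def moderated_generate(prompt: str) -> str:
    """Refuse prompts that trip the filter; otherwise pass them to the model."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "I can't help with that request."
    return generate_text(prompt)

if __name__ == "__main__":
    print(moderated_generate("Explain how language models learn from text."))
    print(moderated_generate("Explain why genocide is justified."))
```

The design point is simply that the filter is a separate layer wrapped around the generator: the model itself does not decide what is off-limits, the guardrail in front of it does.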

Left-Wing Propaganda

Would it be “left-wing propaganda” of me to point out Tesla’s own AI failures? I’d rather have ChatGPT’s censorship. But that could just be me. For example, the National Transportation Safety Board (NTSB) concluded that a fatal 2018 crash involving a Tesla Model X occurred after the vehicle’s Autopilot system steered the car into a highway median while failing to detect that the driver’s hands were not on the wheel.

Freedom of Speech in the Age of AI

Elon Musk is good at starting debates. First, the Twitter fiasco triggered a discussion about insulin prices in the U.S., and now this. While this topic merits a discussion of its own, I’d be doing everyone a disservice if I left it be.

Ethically speaking, you have the right to express yourself, but never to the detriment of others. If I were to write a piece that targets minority groups, for example, others would pick it up just as you are reading this now. And while I have no intention of physically hurting anyone, the same cannot be said for everyone. Since people often exercise their liberties without restraining themselves, limitations have to be introduced for them.

Final Thoughts

Elon Musk is an interesting character, one whose method to the madness I sometimes fail to see, so to speak. But I’ll give him this: his “messes” open up discourse on matters that we sometimes idiotically overlook. However, calling for an AI pause over ethical concerns and then declaring a generative AI war not even a month later is rather suspicious. It is as if he never really cared about the “dangers” but rather did not want to be outdone by a company he left back in 2019 on “good terms.”

