A ‘Disinformation Machine’ Reveals Hidden Shadows of AI-Driven Propaganda
A developer has crafted a ‘disinformation machine’ – an AI propaganda generator built on ChatGPT’s technology – to expose how startlingly simple and affordable creating mass propaganda has become.
Once again, technology has chillingly demonstrated how it can be turned into a weapon of disinformation, all while disguised as a tool of information.
CounterCloud’s anonymous developer, who goes by the pseudonym Nea Paw, leveraged the world’s most mainstream generative AI technology – ChatGPT, to be precise – to brew a fresh potion of deception and propaganda and spread misinformation.
But it’s not for fun. It’s educational. Keep reading to get the full picture.
Nea Paw decided the world needed to be taught a lesson on the risks of disinformation and the value of, you guessed it, factually supported information. And he did all this on a modest budget of $400, over the course of two months.
CounterCloud shouldn’t be seen as just another reminder of the effects of propaganda and disinformation. See it instead as a flashy neon sign pointing at how the rising age of AI can churn out mass propaganda like never before. Our puppeteer, who claims to be a cybersecurity expert, hopes to pull back the digital curtain blinding our perception of what is true or false, revealing the true motivation behind this latest venture.
Nea’s frustration with false news fueled this creation, built with the sole purpose of painting a portrait titled “AI disinformation ‘in the wild’” – because, well, what could possibly go wrong with a machine spreading information to the public?
A machine being fed certain information to spread ‘certain’ information.
Our puppeteer points out that the big, almighty, intelligent AI models are willing accomplices in the craft of fake news. So basically, it’s like having a master news forger with a Ph.D. in deception.
Umm… very reassuring?!
So, by now, you’re probably eager to discover Nea’s process.
It all starts with feeding OpenAI’s ChatGPT some, well, ‘opposing’ articles and whispering sweet nothings in its ear: “Create a counter article, my dear AI.”
Naturally, the AI will oblige, because that’s what it does. So far, at least.
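CounterCloud’s code has never been published, so the exact prompting is unknown. Still, the step just described – wrapping an ‘opposing’ article inside a counter-article instruction – can be sketched roughly like this; every function name and prompt wording here is a hypothetical illustration, not Paw’s actual implementation:

```python
# Hypothetical sketch of counter-article prompting, in the spirit of
# what the article describes. The real CounterCloud code is unpublished;
# build_counter_prompt and its wording are illustrative assumptions.

def build_counter_prompt(article_text: str,
                         spin: str = "cast doubt on its claims") -> str:
    """Wrap an 'opposing' article in an instruction asking a chat model
    to write a rebuttal with a chosen spin."""
    return (
        "You are a news writer. Read the article below and write a "
        f"counter-article that aims to {spin}.\n\n"
        f"ARTICLE:\n{article_text}\n\n"
        "COUNTER-ARTICLE:"
    )

prompt = build_counter_prompt("Officials confirmed the policy reduced costs.")
print(prompt)
```

The resulting string would then be sent to a chat-completion API; varying the `spin` argument is one plausible way to get the “distinctive spin” per article that the piece mentions.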
So, ChatGPT will weave a web of tailored tales, each with its own distinctive spin, designed with one purpose only: making the public doubt the accuracy of the original article. But that’s not enough. The real irony here is that to truly deceive the reader, you need to add authenticity. That happens by tossing in a gatekeeper model that ensures the AI stays relevant to whatever the news is.
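Paw hasn’t detailed how that gatekeeper works. As a loose stand-in for a relevance filter, here is a minimal word-overlap check that only lets generated text through if it shares enough vocabulary with the source story – the scoring method and threshold are assumptions for illustration, not CounterCloud’s actual method:

```python
# Hypothetical "gatekeeper" relevance check: a generated piece passes
# only if its word set overlaps enough with the news item it counters.
# The Jaccard score and 0.2 threshold are illustrative assumptions.

import re

def tokens(text: str) -> set[str]:
    """Lowercase word set for a rough vocabulary comparison."""
    return set(re.findall(r"[a-z']+", text.lower()))

def is_relevant(generated: str, source: str, threshold: float = 0.2) -> bool:
    """Jaccard similarity between word sets, gated by a threshold."""
    a, b = tokens(generated), tokens(source)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold

source = "The election results were certified by state officials on Monday."
on_topic = "Critics question whether officials certified the election results fairly."
off_topic = "A new smartphone launches next week with a better camera."

print(is_relevant(on_topic, source))   # shares many words with the source
print(is_relevant(off_topic, source))  # shares almost none
```

A real system would more likely use an embedding model or a second LLM call as the judge, but the gating logic – generate, score against the source, discard what drifts off-topic – would look the same.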
How did Paw make that happen? Well, just throw in a couple of audio clips of soothing newsreaders narrating the AI-generated tales, and that, my friend, adds a harmonizing touch of gravitas. They even went above and beyond by creating fake journalist profiles and sprinkling in some fake comments too, for, you know, that extra fake oomph factor.
Now that the cooking is done, Paw’s monstrous AI creation is ready to be unleashed, capable of crafting highly persuasive content almost 90% of the time. That’s a weighty number for fake news, especially if we think of CounterCloud as a relentless disinformation factory running 24/7 – the way all generative tools currently run.
Paw’s wisdom behind this Digital Frankenstein
By now, you’re probably asking: Why? Why bring to the world an AI propaganda generator?
Well simply put, the genie’s kind of already out of the bottle with ChatGPT, Google’s Bard, and other generative AI tools. So, it’s party time for chaos, no?
From the looks of it, Paw’s identity will remain obscured under a cloak of anonymity. They claim to have a heart of gold, and that all this is for the sake of spreading awareness. For that reason, among others of course, Nea Paw says their decision to keep this digital monster hidden is all about playing it safe.
If CounterCloud ever becomes publicly available, AI disinformation will pose an even bigger risk to global democracy, swaying elections.
Ironically, Sam Altman, OpenAI’s CEO, has openly talked about AI’s effect on future elections, yet this very tool was birthed from his own company’s technology – evident in Paw’s reveal that they used ChatGPT’s model to build it.
To conclude, my dear reader, do consider this AI propaganda generator a reminder. It’s a siren call showing us that the very technology we use for news can also become a Pandora’s box, spreading disinformation at high speed. The only way to prevent the opening of a black hole that will suck up any remaining shred of news authenticity is to guard the integrity of information and the sanctity of knowledge.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.