AI Chatbots Are Unreliable Narrators Distorting News Events, Study Finds

Generative AI news assistants like ChatGPT, Copilot, Gemini, and Perplexity often distort facts, fueling global concerns over unreliable AI-generated information.

AI chatbots are designed to inform the public, but it seems that instead of informing, they are misinforming, according to a new study by the European Broadcasting Union (EBU) that highlights how generative AI news assistants such as ChatGPT, Copilot, Gemini, and Perplexity frequently distort news content.

Nearly half the time, these AI models, developed by industry leaders, systematically generate factual errors, hallucinations, and biased summaries that distort the public’s understanding of reality. Whether this happens intentionally or not is yet to be determined.

The study involved 22 public broadcasters across 18 countries working in 14 different languages. It found that nearly half of all chatbot responses contained significant factual or sourcing issues, with researchers and media leaders urging the models’ parent companies to fix these flaws before misinformation erodes public trust entirely.

AI Fake News Detection 

With everything unfolding in the world, trusted news has never been a bigger priority, or a bigger problem. There is an urgent need for verification systems and fact-checking protocols for these Large Language Models (LLMs), as their unchecked use threatens to accelerate the misinformation cycle.

Users themselves find it difficult to tell what is real and what is not. On one hand, legacy media outlets are accused of withholding information from the public; on the other, LLMs are playing a dangerous role by presenting misleading information as fact.

In May and June of 2025, the EBU study assessed thousands of chatbot responses to standard news questions.

Results showed that 45% of answers contained at least one significant issue, with 31% suffering from sourcing errors, such as citing unverifiable sources or making incorrect attributions. Another 20% included factual inaccuracies, while 14% lacked adequate context.

The mistakes these AI media technologies produced ranged from relatively minor slips to glaring factual errors.

ChatGPT, for instance, was found to have referred to Pope Francis as the sitting pontiff months after his death, while Perplexity incorrectly claimed that surrogacy was illegal in Czechia. In other instances, Germany’s Olaf Scholz was named chancellor even after Friedrich Merz had taken office, and Jens Stoltenberg was listed as NATO’s secretary general following Mark Rutte’s appointment.

“This research conclusively shows that these failings are not isolated incidents,” said Jean Philip De Tender, the EBU’s deputy director general. “They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”

Gemini, Google’s AI assistant, performed the worst, with 76% of its responses containing major sourcing issues, while Copilot, ChatGPT, and Perplexity also failed key fact-checking benchmarks.

“People must be able to trust what they read, watch, and see,” said Pete Archer, the BBC’s head of AI, adding that “despite some improvements, it’s clear there are still significant issues with these assistants.” 

The findings come at a time when 7% of online news consumers, and 15% of those under 25, already use AI chatbots for news, according to the Reuters Institute’s Digital News Report 2025. This growing reliance, paired with such high error rates, presents a significant threat to the credibility of journalism.

How LLMs Assemble the News

The report highlights a broader dilemma at the heart of generative AI news: how large language models (LLMs) process, interpret, and assemble the news.

These AI systems do not understand truth. They predict words based on probability, blending facts, opinions, and sources without clear differentiation. When fed unverified or misleading data, they replicate and amplify those errors at scale.
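To see why, consider a minimal, illustrative Python sketch of next-token prediction. The numbers and candidate words here are made up for illustration and do not come from the study: the model scores each candidate continuation, converts the scores into probabilities, and samples an answer, with nothing in the process checking whether the chosen word is factually true.

```python
import math
import random

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# after a prompt like "NATO's secretary general is": stale training
# data can leave the outdated answer with the highest score.
candidates = ["Stoltenberg", "Rutte", "uncertain"]
logits = [2.1, 1.9, 0.2]

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
print({c: round(p, 2) for c, p in zip(candidates, probs)}, "->", choice)
```

The model simply emits whichever continuation is statistically plausible given its training data; no step in this pipeline consults current facts, which is exactly how a chatbot can keep naming Jens Stoltenberg long after Mark Rutte took over.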

Researchers warn that such AI-driven distortions could reshape how audiences perceive reality.  
“When these systems distort, misattribute, or decontextualize trusted news, they undermine public trust,” the EBU said in a joint campaign statement titled “Facts In: Facts Out.” “If facts go in, facts must come out. AI tools must not compromise the integrity of the news they use.”

This erosion of factual grounding is not just a technical flaw but a societal threat, and it is why users need to scrutinize AI-generated news. As AI-generated misinformation, deepfakes, and synthetic news proliferate, researchers warn that societies risk losing track of the truth behind the news delivered to them. Recent academic research, however, suggests a different and somewhat ironic outcome.

A 2025 study conducted with Süddeutsche Zeitung found that exposing readers to AI-generated fake news increased their skepticism toward online information, and it also boosted readership of trusted news outlets by over 2.5%.

Readers sought out credible sources to navigate an increasingly unsafe digital environment, showing that trust itself becomes extremely valuable when misinformation goes unchecked.

Still, experts warn that maintaining credibility in this new phase of generative AI news, with AI agents altering the flow of information, will demand continuous investment in AI-literate journalism and fact-checking transparency.

The EBU, along with partner organizations, is urging governments to enforce stricter standards under existing digital integrity and media pluralism laws, alongside independent monitoring of AI systems.

As AI-generated fake news evolves faster than regulators can adapt, the central question remains: can journalism maintain its authority when truth itself is automated? The future of trustworthy information may depend not just on fixing flawed algorithms but on restoring humanity’s trust in what is real.
