Misinformation is no longer just about falsehoods; it is about storytelling. Viral posts and conspiracy theories spread widely not only because they mislead but because they resonate emotionally with audiences, making them harder to counter without resiliency built through digital literacy.
Recent studies show that AI can help detect manipulative narratives, but it cannot fully replace human judgment. A study titled “The Influence of AI Authorship Labels on News Perception” by Zhao, Zhou, and Wang tested 1,600 participants and found that news labeled as AI-generated consistently received lower evaluations, yet emotionally charged stories still drove stronger engagement, underscoring that fighting misinformation requires more than labeling alone.
“Emotional resonance can override source scepticism,” the authors wrote, stressing that labelling alone cannot stop manipulative AI-generated content from spreading. The findings also suggest that AI tools must be paired with human critical thinking and oversight to be effective.
Further research by Luttrell, Davis, and Welch showed how AI-authored text can mimic human writing so well that traditional detection tools often fail. They argue that no single fix can shield journalism from deepfake and misinformation threats.
Instead, layered defenses—detection systems, provenance verification, and human editorial oversight—are essential. Alongside these measures, digital literacy assessment programs will be vital to strengthen society’s resilience.
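To illustrate what a layered defense might look like in practice, here is a minimal sketch of how a newsroom pipeline could chain an automated detector, a provenance check, and escalation to a human editor. The data fields, thresholds, and function names are hypothetical assumptions for illustration and are not drawn from any system named in the research above.

```python
# Illustrative sketch of a layered defense pipeline (hypothetical fields,
# thresholds, and helpers; not a real product or the researchers' system).
from dataclasses import dataclass

@dataclass
class Article:
    text: str
    source_signed: bool      # carries a verifiable provenance credential
    ai_likelihood: float     # score from an upstream AI-text detector, 0..1

def needs_human_review(article: Article, detector_threshold: float = 0.7) -> bool:
    """Layer 1: automated detection. Layer 2: provenance. Layer 3: humans."""
    # Layer 1: flag text the detector considers likely AI-generated.
    if article.ai_likelihood >= detector_threshold:
        return True
    # Layer 2: flag content that lacks verifiable provenance metadata.
    if not article.source_signed:
        return True
    # Otherwise the piece passes to routine editorial checks.
    return False

if __name__ == "__main__":
    suspect = Article(text="...", source_signed=False, ai_likelihood=0.85)
    print(needs_human_review(suspect))  # True -> escalate to an editor
```

The point of the sketch is that no single layer decides alone: a low detector score still fails the provenance check, and anything flagged lands with a human editor rather than being rejected automatically.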
Journalists are already stressed. In a survey of 504 reporters in the Basque Country, nearly 90% said they believe AI will worsen the threat of disinformation. Veteran journalists were particularly worried, with many citing how difficult it will be to identify deepfakes or content that mixes facts with manipulation. For them, tools such as AI information retrieval and AI-driven information discovery could help, but only if combined with editorial training.
Resiliency Through Digital Literacy and Cultural Nuance
Florida International University researchers found that disinformation typically arrives in the form of “weaponized storytelling,” in which narrative techniques such as persona cues, cultural symbols, and story structure are deliberately constructed to elicit emotion.
Machine learning algorithms can analyze usernames, posting rates, and symbolic imagery to detect manipulations that typically go unnoticed by traditional fact-checkers. Such insights are even influencing AI in media planning, where understanding cultural nuance is essential to fighting misinformation online.
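As a rough illustration of the kind of signal analysis the researchers describe, the sketch below scores a post using a few invented cues: an auto-generated-looking username, an unusually high posting rate, and emotionally loaded symbols. The features, weights, and symbol list are assumptions for illustration only, not the FIU team’s actual model.

```python
import re

# Hypothetical list of emotionally loaded symbols; a real system would learn
# culture-specific cues from data rather than hard-code them.
LOADED_SYMBOLS = {"🔥", "💀", "🐍"}

def manipulation_score(username: str, posts_per_hour: float, text: str) -> float:
    """Toy heuristic combining account and content cues into a 0..1 score."""
    score = 0.0
    # Persona cue: auto-generated-looking handles (a name plus a long digit run).
    if re.fullmatch(r"[A-Za-z]+\d{6,}", username):
        score += 0.3
    # Behavioral cue: an unusually high posting rate suggests automation.
    if posts_per_hour > 20:
        score += 0.4
    # Symbolic cue: emotionally loaded imagery in the text itself.
    if any(sym in text for sym in LOADED_SYMBOLS):
        score += 0.3
    return min(score, 1.0)

print(manipulation_score("patriot84823191", 35, "They are coming for you 🔥"))
```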
The color white, for instance, is something Western readers would associate with weddings, yet in certain Asian cultures it symbolizes death. Training AI on such cultural differences, the researchers found, improves its sensitivity to misleading narratives. Without this cultural training, deepfake propaganda could exploit symbolic cues to manipulate audiences.
The findings point to a critical division of labor: AI can read and flag suspicious content, but its emotional and cultural context must still be understood by humans.
Misinformation and Deepfakes in Ethical Technology
Despite the promise, AI also poses unique ethical issues. Deepfakes, manipulated data sets, and emotionally manipulative content can undermine trust in media and institutions. As one editorial put it, “Fact-checking alone is no longer sufficient.” The task now includes preventing AI-generated propaganda from dominating narratives and embedding safeguards within AI systems.
Policymakers and journalists emphasize the role of AI ethics in society, ensuring that algorithms remain transparent and accountable.
Without strong checks, deception rather than fact could increasingly set the agenda, undermining resiliency through digital literacy. Finding the balance between the promise of AI and the threat of deception will set the course for responsible technology in the future.