
Since the Israel-Iran digital war began, deepfake videos, photographs, and chatbot misinformation have dominated social media, with advanced AI tools creating realistic but false content faster than fact-checkers can respond, leaving disinformation detection techniques struggling to counter the propaganda.
AI-generated misinformation marks a new form of warfare: conflict waged not only with weapons, but with technology capable of shaping people’s perceptions and beliefs.
As AI software grows more sophisticated, from machine disinformation generators to the transformer-based misinformation detection meant to counter them, narratives can be tailored to specific goals, making it harder for users and platforms to separate truth from lies.
Technology Driving Waves of Fraud
Since the attacks on Iran began, social media platforms have seen a flood of AI-generated fake news, videos, and images depicting damage and military triumphs. These have included deepfake videos of devastation at targeted infrastructure, or of missiles and fighter jets that never existed.
“These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication,” said Ken Jon Miyachi, creator of BitMindAI.
Google’s Veo 3 AI video generator, known for its hyper-realistic output, has also been linked to some of the most convincing fake videos.
“It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,” said Hany Farid, the co-founder of GetReal Security.
NewsGuard identified over 50 websites circulating false reports of an Israeli pilot being taken captive by Iran, or of widespread destruction in Israeli cities. Adding to the confusion, footage from military simulation games has been mistaken for actual combat.
A viral TikTok video showing an Israeli plane being shot down was removed after fact-checkers revealed it was simulation-game footage. Even AI fact-checking tools have labeled fakes as real, fueling public doubt and highlighting the urgent need to improve social media misinformation detection to counter advancing AI deception.
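The article repeatedly invokes social media misinformation detection without describing its mechanics. As a heavily simplified, illustrative sketch only, text-based detection can be framed as a classification problem; the toy naive Bayes model below is not any vendor's actual system (production detectors rely on transformer models and multimodal signals), and every headline and label in it is invented for demonstration:

```python
# Toy illustration of text-based misinformation detection: a tiny
# naive Bayes classifier over bag-of-words features with add-one
# smoothing. All "headlines" and labels below are invented examples.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> sample count
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total = sum(self.label_counts.values())
        scores = {}
        for label in self.label_counts:
            # log prior + smoothed log likelihood of each token
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for w in tokenize(text):
                score += math.log(
                    (self.word_counts[label][w] + 1) / (n + len(self.vocab))
                )
            scores[label] = score
        return max(scores, key=scores.get)

# Invented training data (NOT real headlines).
data = [
    ("shocking footage shows secret missile strike", "fake"),
    ("exclusive video proves enemy jets destroyed", "fake"),
    ("officials confirm ceasefire talks resume", "real"),
    ("ministry releases verified casualty figures", "real"),
]
clf = NaiveBayes()
clf.train(data)
print(clf.predict("shocking video shows jets destroyed"))  # -> fake
```

The point of the sketch is the framing, not the model: real detection pipelines apply the same train-then-score pattern with far richer features and labeled corpora.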
Updated Misinformation Detection Techniques
On X, TikTok, and Instagram, AI-generated videos and photos have drawn tens of millions of views.
Three viral deepfakes falsely depicting military victories surpassed 100 million views combined. Pro-Israel accounts have also spread a wave of misleading content, recycling video of Gaza and captioning it “This is Tel Aviv” to exaggerate discontent with the Iranian regime.
“We haven’t yet confirmed any video of the F-35s brought down,” said Lisa Kaplan, CEO of analyst firm Alethea, citing fakes against Israel’s sophisticated F-35 fighter aircraft.
Kaplan added that Russian influence networks may be behind some of the machine-generated fake news, in an attempt to undermine confidence in Western military strength.
The rapid spread of AI fakes reflects a wider crisis of online trust. Miyachi stressed the urgent need for better misinformation detection techniques to protect the integrity of public discourse.
Enhancing deepfake detection techniques remains a critical step toward overcoming this digital disinformation warfare. AI technologies will only grow smarter, and global powers already treat them as powerful tools of digital warfare, spreading fake news and chatbot manipulation faster than the truth can keep up.
This new reality demands ongoing improvement of misinformation detection techniques and media literacy, combining deepfake detection with social media false-news detection to protect audiences and platforms against this digital-warfare threat.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Tech sections to stay informed and up-to-date with our daily articles.