
Researchers from Stanford University and other institutions revealed that widely used AI models are producing videos that stereotype European cities and generating Australian imagery filled with outdated, harmful clichés of Indigenous peoples, exposing a darker form of generative AI racism.
In 2025, AI tools are being systematically exploited to generate racist content, exposing the conflict between generative AI’s rapid proliferation and its tendency to reinforce and magnify existing stereotypes.
This amplification of societal biases through generative AI racism forces a much-needed re-examination of the ethics embedded within these advanced technologies.
In 2025, Europe’s political far right employed, and still employs, generative AI racism to spread misinformation on social media platforms such as X and TikTok, exposing the systemic bias of large AI models worldwide.
Gen-AI, once the epitome of innovation and creativity, has now become a weaponized tool to spread extremist ideologies and racist rhetoric.
Deepfakes and biased image outputs are distorting public opinion and fueling riots from London to Milan in the name of technical progress.
How Is AI Harmful to Society? When AI Becomes a Weapon of Hate
Generative AI is increasingly being misused to produce dystopian portrayals of European cities “taken over” by migrants.
These viral clips, which show immigrants “replacing” white residents, have become digital propaganda tools for far-right figures such as Britain’s Tommy Robinson. His reposted video “London in 2050” amassed more than half a million views on X.
“AI tools are being exploited to visualise and spread extremist narratives,” warned Imran Ahmed, CEO of the Center for Countering Digital Hate, adding that “moderation systems are consistently failing across all platforms.” He pointed directly to Elon Musk’s X as “very powerful for amplifying hate and disinformation.”
Even though TikTok banned the videos’ creator, similar content continues to circulate widely. Politicians like Martin Sellner in Austria, Sam van Rooy in Belgium, and Italy’s Silvia Sardone have also shared AI-generated dystopian clips depicting migrants as invaders.
Researchers say such generative AI racism videos visualize the dangerous “great replacement” conspiracy theory, which falsely claims Western elites are replacing white populations with immigrants. A London School of Economics scholar, Beatriz Lopes Buarque, described the trend as a “visual representation of hate,” adding, “Mass radicalization facilitated by AI is getting worse.”
When Machines Mirror Human Prejudice
Beyond Europe’s extremist content online, researchers are uncovering how deep-rooted racism continues to shape AI models. A recent Stanford University study found that large language models (LLMs), including those developed by OpenAI, Meta, and Google, generate covertly racist stereotypes, especially toward speakers of African American English (AAE).
“They generate text with terrible stereotypes from centuries ago,” said linguistics professor Dan Jurafsky, while lead researcher Pratyusha Ria Kalluri warned that tech companies are “playing whack-a-mole” with racism, only hiding bias instead of eliminating it.
Similarly, researchers at the University of Sydney discovered that generative AI tools from DALL·E 3 to Firefly reproduce sexist and racist imagery when prompted with simple requests like “an Australian family.”
Their findings show AI overwhelmingly idealizes white, suburban, heteronormative families, while depicting Aboriginal Australians using “wild” and “uncivilized” tropes.
These AI-generated dystopias expose how technology can mirror human prejudice, turning creative innovation into a digital weapon that reshapes public opinion and reinforces real-world discrimination. The problem shows how data-driven systems replicate the very biases they were built to transcend.
“Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes,” the authors wrote, warning that these systems are “reducing cultures to clichés” and embedding digital racism at scale.
Experts agree that generative AI racism does not mean AI is inherently racist; rather, these systems reflect the data and societal biases they are trained on, because “the problem is that now we live in a society in which hate is very profitable,” according to Buarque.
From algorithmically skewed propaganda videos to image generators steeped in algorithmic bias, generative AI racism is slowly turning into a mirror of society’s darker side. If developers and platforms fail to address these biases, the same technology meant to spark imagination and growth will continue to feed the very divisions it was meant to heal.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.