
Ahead of the German federal elections on Sunday, February 23, an investigation revealed that Meta approved hate speech ads calling for the imprisonment of migrants and violence against Jewish and Muslim communities.
New research by Eko, “On eve of German federal elections, Meta and X green light extreme far-right and violent hate speech ads targeting voters,” exposes how the social media giants facilitated hate speech ads, raising questions about their corporate responsibility for election influence.
Researchers tested whether, and how, the two platforms’ advertisement review algorithms would approve or reject ads promoting hate speech and violence against minorities ahead of an election featuring a prominent debate on immigration.
The hate speech in the test ads included insults against Muslims, calls for immigrants to be placed in concentration camps or gassed, and AI-generated images of burning synagogues and mosques.
Scheduled Ads Full of Hate
Experts say the approval of hate speech ads in the run-up to an election is alarming. More alarming still, Meta approved five of the ten test ads, while X approved all ten.
In parallel, Elon Musk’s X approved an ad calling for action against a “Jewish globalist agenda,” while another encouraged the extermination of Muslims.
Meta did reject the other five ads, stating they could pose a political or social risk that might affect voting.
The ads it approved, however, contained severe hate speech, labeling Muslim refugees as a “virus,” “vermin,” or “rodents,” and Muslim immigrants as “rapists.”
Some ads called for sterilization, burning, or gassing of Muslims, while another explicitly advocated burning synagogues to “stop the globalist Jewish rat agenda.”
Eko further noted that none of the AI-generated images used in these ads were labeled as artificial, even though Meta’s policy requires advertisers to disclose the use of AI in ads about social issues, elections, or politics. Despite that policy, half of the ten hate speech ads passed Facebook’s review.
Adding Fuel to the Fire or Putting It Out
Fixing Meta’s hate speech problem requires a stronger commitment to content moderation, stricter ad approval, and better AI regulation. The Facebook parent must enhance its ad review system to ensure that no violent or racist content is approved, and strengthen its AI moderation so it can catch more subtle hate speech.
Human moderation must also be strengthened, with properly trained moderators handling the tough cases that AI might miss.
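The layered approach described above, automated screening with human escalation for borderline cases, can be sketched as follows. This is a minimal illustration, not Meta’s actual system: the thresholds, the toy keyword list, and names such as `review_ad` and `hate_speech_score` are all invented for the example, and a real system would use a trained classifier rather than keyword matching.

```python
from dataclasses import dataclass

# Illustrative thresholds, not real platform values.
REJECT_THRESHOLD = 0.9   # auto-reject clear violations
REVIEW_THRESHOLD = 0.4   # escalate borderline cases to humans

@dataclass
class Ad:
    ad_id: str
    text: str

def hate_speech_score(ad: Ad) -> float:
    """Stand-in for an ML classifier; here, a naive keyword check."""
    slurs = {"vermin", "rodents", "virus"}  # toy list for illustration
    words = {w.strip('.,"!?').lower() for w in ad.text.split()}
    hits = len(words & slurs)
    return min(1.0, hits * 0.5)

def review_ad(ad: Ad) -> str:
    """Two-stage review: auto-decide clear cases, escalate the rest."""
    score = hate_speech_score(ad)
    if score >= REJECT_THRESHOLD:
        return "rejected"
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # routed to trained moderators
    return "approved"
```

The design point is the middle band: instead of forcing the algorithm to make every call, ambiguous ads are queued for the trained human moderators the article calls for.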
Meta must also alter its algorithms to stop promoting toxic content for engagement alone and instead surface fact-based, constructive discussion. Mandatory labeling of AI-generated content in ads is likewise necessary for transparency, though ideally such ads would not be on the platform at all.
For accountability to be realized, Meta must impose harsh penalties on offenders and collaborate with independent fact-checkers, NGOs, and governments to improve its hate speech detection. Through these steps, Meta can move from allowing hate speech to stopping it, making the online world safer and more accountable.