AI Music Tools Pushing Hate Further

AI music tools are being used by malicious individuals to produce and spread hateful songs.

By generating hateful songs through the use of AI, these actors promote homophobia, racism, and extremist propaganda.

ActiveFence, a service for managing trust and safety operations on online platforms, reports a significant surge in discussions within hate-speech communities about leveraging AI music generation tools to create offensive songs that target minority groups. These communities are also publishing instructions that help others misuse the technology.

These AI-generated tracks, which are shared and discussed on forums and discussion boards, were mainly created to incite hatred against ethnic, racial, gender, and religious groups, as well as to promote death, self-harm, and terrorism.

Using music to spread hateful messages, regardless of how it is produced, is nothing new. The concern now is that the rise of AI music tools makes it far easier for individuals to produce malicious songs without any musical expertise, much as AI-generated images, videos, and text have accelerated the spread of misinformation and hate speech.

AI Music Tools Turn to Harmful Rhythms  

The content moderation company highlighted that these malicious actors have found ways to evade content filters on some platforms, such as Udio and Suno, using phonetic spellings and modified spacing for offensive terms. For instance, instead of writing the word ‘Satan’ they write ‘say tan’ to prevent detection.
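To illustrate why simple keyword filters miss these tricks, here is a minimal sketch of one common countermeasure: normalize the text (strip spaces and punctuation), then fuzzy-match windows of it against a blocklist. The blocklist, threshold, and function names below are illustrative assumptions, not the actual filtering logic of Udio, Suno, or ActiveFence.

```python
import re
from difflib import SequenceMatcher

# Illustrative blocklist -- real systems use large curated term lists.
BLOCKED_TERMS = ["satan"]


def normalize(text: str) -> str:
    """Lowercase and strip non-alphanumerics so 'say tan' and
    's.a.t.a.n' both collapse to a single run of letters."""
    return re.sub(r"[^a-z0-9]", "", text.lower())


def is_evasive_match(text: str, threshold: float = 0.8) -> bool:
    """Fuzzy-compare sliding windows of the normalized text
    against each blocked term, catching near-miss spellings."""
    norm = normalize(text)
    for term in BLOCKED_TERMS:
        # try windows roughly the size of the blocked term
        for size in (len(term), len(term) + 1, len(term) + 2):
            for i in range(max(1, len(norm) - size + 1)):
                window = norm[i : i + size]
                if SequenceMatcher(None, window, term).ratio() >= threshold:
                    return True
    return False
```

With this sketch, `is_evasive_match("say tan")` returns `True` because the normalized form "saytan" is close enough to "satan", while ordinary text stays below the threshold. In practice, attackers adapt to whatever normalization is deployed, which is why such filters are only one layer of a moderation pipeline.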

In response to the matter, a Udio spokesperson told TechCrunch that the company does not allow hate speech on its platform, while Suno did not comment.

In its investigation of these communities, the digital safety provider found AI-generated songs promoting harmful conspiracy theories and advocating mass violence; songs containing slogans associated with extremist groups; and songs glorifying sexual violence against women.

“AI makes harmful content more appealing – think of someone preaching a harmful narrative about a certain population and then imagine someone creating a rhyming song that makes it easy for everyone to sing and remember,” the ActiveFence spokesperson said.  

“They reinforce group solidarity, indoctrinate peripheral group members and are also used to shock and offend unaffiliated internet users.” 

Call for Stronger Measures 

To fight such abuse, the firm urges music generation platforms to adopt more effective measures and conduct safety assessments of their tools. To this end, the spokesperson said, “Red teaming might potentially surface some of these vulnerabilities and can be done by simulating the behavior of threat actors.”

Despite these solutions, content moderation remains a moving target, given that malicious actors will keep finding new ways to bypass filters.

Final Thoughts

It is true that a firm like ActiveFence was able to detect AI-generated hateful music, but the question remains: wouldn’t language itself be an obstacle to solving this issue, especially as customized AI models built for different purposes become dominant?

Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.