AI Leaders Unite to Combat AI-Generated CSAM, Deepfakes 

Following a wave of reported incidents involving deepfakes and child sexual abuse material (CSAM), leading AI companies united on Tuesday to combat the propagation of such AI-generated content.

Thorn, a non-profit dedicated to fighting CSAM, announced that tech companies including Big Tech giants Meta, Google, Microsoft, and Amazon, along with CivitAI, Stability AI, OpenAI, and several others, have signed on to new standards the organization developed to address the issue.

Triggered by Criticism 

Many of these companies have faced criticism after their products and services were used to produce and spread sexually explicit deepfakes of children.

AI-generated CSAM has drawn considerable attention from both legislators and the public, driven by reports of teenage girls being targeted with AI-generated explicit content featuring their faces.

NBC News previously reported that deepfake adult content showing real children’s faces appeared among the top results on Bing and Google when searching terms like “fake nudes” alongside the names of specific female celebrities, as well as the term “deepfakes”. The outlet also reported that in March 2024, Meta’s platform promoted an ad for a deepfake app that manipulated images of a young actress.

Thorn’s newly adopted principles, known as “Safety by Design”, commit the companies to concrete safety measures. These include developing technologies to identify AI-generated images, such as watermarking, although watermarking has been criticized as easy to bypass.

Another aspect of these principles targets a practice that has drawn criticism: the presence of CSAM in AI training datasets. In December 2023, Stanford researchers found that many child sexual abuse images were included in a dataset used to train Stable Diffusion, a popular AI model from Stability AI.

For its part, Stability AI told NBC News that it trained on a “filtered subset” of the dataset and made modifications to its models to mitigate misuse.

The principles set by Thorn also require AI models to be checked for child safety before release, and urge companies to host these models responsibly and to include safeguards against abusive misuse.

Implementing these standards will be a challenge for companies like CivitAI, which has faced criticism because its platform lets users request deepfakes, including of celebrity women and sometimes for explicit content.

As for Thorn, the internet safety organization has itself come under scrutiny for its collaborations with law enforcement, especially concerning how its technologies are used to monitor online requests.
