FBI Alleges Citizen Used AI to Generate 13,000 Child Sexual Abuse Images

Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison.

The FBI has charged Steven Anderegg, a 42-year-old man from Wisconsin, with creating and distributing over 10,000 sexually explicit AI-generated images depicting children. He allegedly shared the images, including with a minor, through social media platforms such as Instagram.

Anderegg is accused of using the AI model Stable Diffusion to generate these images, which included detailed and explicit content. His activities came under scrutiny after the National Center for Missing & Exploited Children (NCMEC) received complaints about his social media conduct, leading to a federal investigation.

Authorities executed a search warrant on Anderegg’s residence, where they seized his computer and uncovered thousands of AI-generated images. The investigation revealed that Anderegg used specific text prompts to generate the abusive material.

Anderegg has been charged with multiple counts of creating, distributing, and possessing child exploitation material, as well as sending explicit material to a minor. If convicted, he faces a maximum sentence of 70 years in prison. This case marks one of the first instances in which the FBI has pursued charges over the creation of AI-generated explicit material.

The rise of generative AI has sparked concerns among child safety advocates and researchers about its potential misuse. Reports to NCMEC have increased significantly, partly due to the proliferation of AI-generated abusive material. This trend poses new challenges for law enforcement and child protection agencies.

“The justice department will aggressively pursue those who produce and distribute child sexual abuse material – or CSAM – no matter how that material was created,” stated Deputy Attorney General Lisa Monaco. “CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

Stability AI, the company behind Stable Diffusion, has stated that it prohibits the use of its technology for creating illegal content and has implemented safeguards to prevent misuse.

For those affected by child exploitation, resources and support are available through various hotlines and support organizations globally. In the US, individuals can contact the Childhelp hotline, and in the UK, the NSPCC offers support for children and adults concerned about child safety.
