AI Invading Children's Privacy in Australia

Human Rights Watch (HRW) researchers have found that photos of Australian children are being used to train AI models, breaching the children's privacy.

The researchers found the children's photos in a large dataset; the images captured intimate moments, such as births, preschoolers' birthday parties, and girls in bathing suits.

The dataset containing the images, LAION-5B, includes 5.8 billion photographs and is often used to train AI generators that specialize in producing hyper-realistic images.

In this regard, Hye Jung Han, a children's rights and technology researcher at HRW, called such practices alarming: "It's really quite scary and astonishing."

Deepfake Scandal Triggers Probe

The researchers decided to investigate the dataset after an incident at Bacchus Marsh Grammar School, where explicit images of female students were allegedly generated using AI. HRW then reviewed 5,850 images from the dataset and found that 190 depicted Australian children from different states and territories.

According to Han, the prominence of children's images in the dataset raises concerns: "From the sample that I looked at, children seem to be over-represented in this dataset, which is indeed quite strange."

AI Regulation to the Rescue 

To collect the data in LAION-5B, a web crawler was employed, the most common tool for scouring the internet for specific content. The images were scraped from well-known platforms such as YouTube and Flickr, as well as from personal websites, school websites, and the sites of photographers hired by families.

Professor Simon Lucey, Director of the Australian Institute for Machine Learning at the University of Adelaide, believes that AI is currently a "wild west," drawing attention to the dangers of using such datasets and emphasizing that machine learning's capabilities could go beyond what one can imagine.

While there is no past evidence that AI models have unintentionally generated images of real children, it seems possible. To prevent this, Lucey suggests discontinuing AI models whose training data cannot be traced with certainty.

In response to the investigation, LAION, the German non-profit organization behind the dataset, stated that it would remove the reported images of Australian children, though the data had already been used to train AI models. HRW did not find any new instances of child sexual abuse material but stressed that the presence of children's images poses a significant risk.

