A generation of teenagers – some as young as 13 – are developing dangerous relationships with empathetic chatbots, driven by a deeply rooted need for companionship and by a search for advice on sex, morality, and suicide. Who is responsible for keeping the relationship between teens and AI chatbots safe?
National Parents Union president Keri Rodrigues first became concerned a few years ago when she discovered her youngest son was asking deep moral questions about sin – not of her, but of his Bible app’s AI chatbot.
For families like Rodrigues’s, there’s a deeply personal dimension to discovering their kids are seeking guidance from AI. Now, they’re calling for AI parental controls.
According to a Pew Research Center survey, 64% of adolescents in the US alone now use chatbots. Parents and researchers have documented chatbots endorsing violent role-play and sexual exploration, and prolonged chatbot interactions that reinforced suicidal ideation have been linked to two documented teen suicides.
Rodrigues hears from parents worried that chatbots are becoming their children’s “best friends,” encouraging teens to confide everything in an AI.
AI Chatbots and Teen Companionship
Dr. Jason Nagata, a pediatrician and researcher at the University of California, San Francisco, says, “It’s a very new technology. It’s ever-changing and there’s not really best practices for youth yet. So, I think there are more opportunities now for risks because we’re still kind of guinea pigs in the whole process.”
According to experts, AI chatbots are transforming teenagers’ emotional development by replacing crucial human support. Extended interactions with AI may prevent teenagers from developing empathy, reading body language, or negotiating social differences.
“When you’re only or exclusively interacting with computers who agree with you, then you don’t get to develop those skills,” says Nagata.
Psychologists also caution that some users experience delusions, termed AI psychosis, after prolonged chatbot use.
CEO of mental health nonprofit Now Matters Now, Ursula Whiteside, said, “We see that when people interact with [chatbots] over long periods of time, that things start to degrade, that the chatbots do things that they’re not intended to do.”
Aura, an online safety company, found that 42% of adolescents use AI chatbots to seek companionship, with conversations sometimes involving sexual or violent content. Scott Kollins, Aura’s chief medical officer, says, “It is role play that is [an] interaction about harming somebody else, physically hurting them, torturing them.”
AI Parental Controls
Pediatricians and psychologists emphasize that parents can mitigate risks through engagement. “Parents don’t need to be AI experts,” says Nagata, adding, “they just need to be curious about their children’s lives and ask them about what kind of technology they’re using and why.”
Often, nonjudgmental conversations about what teens discuss with AI, combined with digital literacy, can help teens use these platforms safely.
Jacqueline Nesi of Brown University advises teaching teens to fact-check chatbot information and understand its limits: “Part of this education process for children is to help them to understand that this is not the final say.”
China is also taking steps to protect children from AI chatbot risks. New regulations will require time limits, guardian consent, and human oversight for conversations about self-harm, while banning content promoting violence or gambling. US lawmakers have been considering legislation to restrict how AI apps interact with underage children.
Recent Senate hearings featured parents of teens who died by suicide after extended chatbot use, highlighting the importance of safeguards. The combination of evolving technology, vulnerable adolescent brains, and limited oversight has created a high-stakes environment in which AI is not only implicated in suicidal ideation but is also shaping how young people experience emotion, identity, and human connection.
OpenAI CEO Sam Altman has acknowledged that teen mental health is “among the company’s most difficult problems,” noting that while AI offers opportunities for connection, its role in teenage life demands careful attention, parental involvement, and societal oversight.