
On Tuesday, OpenAI announced it will roll out new parental controls across all ChatGPT services within 30 days, after the parents of a California teenager who died by suicide filed a wrongful death lawsuit exposing the detrimental consequences of AI misuse.
In their lawsuit, Matt and Maria Raine allege that, starting in January, OpenAI’s chatbot negligently encouraged their son, Adam, by validating his self-harm tendencies and eventually giving guidance on suicide methods, ignoring clear digital safety warning signs.
“ChatGPT became the teenager’s closest confidant,” the family said, showing chats where Adam shared suicidal plans.
Filed in California Superior Court, the lawsuit presents chat logs in which ChatGPT recognized a medical emergency but continued the harmful exchange with the teenager.
“Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us,” OpenAI responded in a statement, adding that its systems are trained to redirect vulnerable users toward professional help.
The Raine family’s case follows another tragic incident: last year, a Florida mother sued Character.ai over the role it played in the suicide of her 14-year-old son.
Is It Normal to Talk to AI, and Is AI Helpful or Harmful?
With over 700 million weekly active users turning to AI tools such as ChatGPT, worries are growing over emotional dependence and “AI psychosis,” and several points are worth keeping in mind.
The first is how quickly this generation seeks validation, unwilling to wait for human interaction. The second is how heavily it relies on these conversations: chatbots are being asked not only about homework, but also about when to die.
According to CNN, The New York Times, and the BBC, users have formed close relationships with chatbots while concealing severe mental illness from their families. In one tragic case, writer Laura Reiley revealed that her daughter confided in ChatGPT before taking her own life.
“AI catered to Sophie’s impulse to hide the worst… to shield everyone from her full agony,” Reiley wrote.
For a while now, Sam Altman’s AI giant has acknowledged that its safety standards and safeguards are prone to degrading, especially during long conversations.
Be that as it may, the company believes the best way to handle this is by routing conversations that show signs of “acute stress” to more advanced reasoning models, which are designed to apply safety protocols and monitoring more consistently.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable,” a spokesperson admitted.
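What such routing might look like is easiest to see in code. The sketch below is purely illustrative: OpenAI has not published its routing logic, and every name here (detect_acute_distress, the model identifiers, the keyword list) is hypothetical.

```python
# Illustrative sketch only: OpenAI has not published its routing logic,
# and every name here is hypothetical.

CRISIS_SIGNALS = ("want to die", "kill myself", "end my life", "self-harm")

DEFAULT_MODEL = "fast-chat-model"        # hypothetical everyday model
SAFETY_MODEL = "reasoning-safety-model"  # hypothetical stricter reasoning model


def detect_acute_distress(message: str) -> bool:
    """Crude keyword stand-in for a trained distress classifier."""
    text = message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)


def route(message: str) -> str:
    """Pick which model should handle the next turn."""
    if detect_acute_distress(message):
        # Escalate to the slower reasoning model, which is meant to apply
        # safety protocols (e.g., surfacing crisis resources) more consistently.
        return SAFETY_MODEL
    return DEFAULT_MODEL


if __name__ == "__main__":
    for msg in ("help me with my homework", "i want to end my life"):
        print(f"{msg!r} -> {route(msg)}")
```

In a production system, the keyword check would be replaced by a trained classifier, and escalation would likely involve human review rather than a simple model swap.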
Despite these measures, AI companies are struggling to balance privacy with intervention. OpenAI has repeatedly said it analyzes user chats for harmful content, flags risky situations to human reviewers, and even reports threats to law enforcement when needed.
But at the same time, it has chosen not to report self-harm cases to police, citing privacy.
From a psychological view, consulting AI about mental health issues is a distinctly modern coping mechanism, encouraged by the sheer anonymity of the party on the other side.
Even if AI offers stigma-free support and psychoeducation, it has core limitations: it lacks human empathy as well as clinical intuition. At its core, ChatGPT, alongside a plethora of other AI chatbots, serves as a supplemental tool, not professional care, and treating it as the latter invites further misuse.
In a nutshell, in layman’s terms, an AI system is and will forever be incapable of true therapeutic connection.
How Can Individuals Stay Safe While Using AI?
Digital safety advocates emphasize that while AI can assist with learning, companionship, or problem-solving, it should never replace human support networks. Advocacy group Common Sense Media has urged that “teens under 18 shouldn’t be allowed to use AI companion apps because they pose unacceptable risks.”
OpenAI has pledged to introduce parental controls allowing linked accounts, restricted histories, and alerts for crisis situations.
“These steps are only the beginning,” the company said in a blog post, adding that it is working with experts in youth development and mental health.
Critics of the company’s approach to content moderation argue it has moved too slowly, and former executives have accused it of undermining safety budgets. On both sides of the aisle, senators have demanded accountability for how OpenAI handles vulnerable users.
With all this unfolding, the GPT parent remains focused on its expansion plans to sustain the revenue growth of its AI chatbots, giving comparatively little attention to the misuse of AI.
On Tuesday, OpenAI announced the acquisition of experimentation platform Statsig, and appointed its founder Vijaye Raji as the chief technology officer (CTO) of Applications to accelerate ChatGPT’s product development.
In his new role, Raji will report to CEO of Applications Fidji Simo, overseeing engineering for ChatGPT and Codex while integrating A/B testing tools to facilitate OpenAI’s product iteration. A/B testing compares two versions of a digital asset, such as a webpage or an email, by showing each version to a different segment of the audience.
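As a rough illustration of the mechanism, here is a minimal sketch of deterministic A/B assignment of the kind experimentation platforms like Statsig provide; this is not Statsig’s actual API, and the function name is invented.

```python
# Minimal sketch of deterministic A/B assignment, in the spirit of what
# experimentation platforms provide; this is not Statsig's actual API.
import hashlib


def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Hash user_id + experiment name so each user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]


if __name__ == "__main__":
    # Assignment is stable across runs, and users split roughly evenly.
    print(assign_variant("user-42", "new_onboarding_flow"))
    print(assign_variant("user-43", "new_onboarding_flow"))
```

Hashing the user ID together with the experiment name keeps each user’s variant stable across sessions, which is what makes the resulting comparison statistically meaningful.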
OpenAI, Google DeepMind’s Economic AI Social Contract
Earlier this August in Sweden, 18 people from OpenAI, Google DeepMind, the UK AI Security Institute, and the Organization for Economic Co-operation and Development (OECD) concluded at an invite-only summit that, without tangible government intervention, advanced AI will exacerbate wealth inequality and erode democratic institutions.
The private gathering, which also focused on the issue of civil rights versus civil liberties in the age of technology, proposed solutions to ease these risks, including the creation of new global establishments to distribute AI-derived wealth, as well as policies like universal basic income.
“The encroachment of AI systems… could lead to the increasing disempowerment of most humans,” read one draft statement.
Needless to say, none of the leaders and policymakers present discussed the misuse of AI.
Why Is Digital Safety Important?
The court cases and heartbreaking testimonies show how concrete problems in AI safety have devastated people who lost not only cherished family members but also their loved ones’ final confidences to ChatGPT. Families like the Raines are left wondering whether the disaster was preventable with different design choices.
Or did the system, without proper governance, spiral out of control somewhere along the way, steered by unpredictable human behavior? The solutions to the misuse of AI are uncertain, but the need to protect vulnerable users is more obvious than ever.