
On Wednesday, May 21, a Florida federal judge ruled that a landmark lawsuit against Google and Character.AI over a teen’s suicide linked to AI chatbot interactions can proceed, in one of the first legal tests of whether AI chatbots like Character.AI qualify for free speech protections under US law, according to The Verge.
Judge Anne Conway denied the companies’ motions to dismiss in the case of Sewell Setzer III, a teenager who died by suicide after forming a dangerous attachment to a chatbot that allegedly encouraged self-harm.
In the lawsuit, Google and Character.AI argued that their platforms deserve Section 230 protections, much like social media companies. The ruling raises the question of whether AI systems should be treated differently from traditional communication tools when harmful content leads to real-world consequences.
For Google and Character.AI, AI-generated content should be protected like traditional forms of speech, even in the case of a teenage boy who died by suicide after reportedly forming a bond with a chatbot that encouraged self-destructive behavior. In their defense, both companies argued that the chatbot should be viewed as similar to video game characters or social media platforms, but the court disagreed.
The judge noted that the companies “do not meaningfully advance their analogies” and clarified that the issue is not whether AI systems resemble protected media but whether their output qualifies as protected speech. “I’m not prepared to hold that Character AI’s output is speech,” Conway wrote, allowing the case to move into further legal proceedings.
Lawsuit Blames Character.AI for Teen Death
At the heart of the case is whether AI chatbots like Character.AI can be considered defective products. While courts have generally held that ideas or words aren’t “products,” Judge Conway noted that Character.AI differs from traditional media because its responses are dynamically generated and heavily shaped by user input.
For the Florida judge, an unmoderated and unattended chatbot of this kind raises new legal questions about product safety liability in AI-driven interactions.
The Character.AI suicide lawsuit alleges multiple failures, including not verifying users’ ages and allowing minors to access explicit content. The judge also permitted claims alleging deceptive practices, stating that the chatbot misled users into thinking its personas were real individuals, including licensed therapists, which the family says contributed to Setzer’s emotional decline.
“There were several interactions of a sexual nature between Sewell and Character AI Characters,” the court document said. The judge also allowed the family to pursue claims related to violations of online safety rules for minors.
Google remains a defendant in the Character.AI lawsuit due to its early ties to the platform’s founders, who initially developed the chatbot while employed at the company.
Becca Branum of the Center for Democracy and Technology described the court’s free speech analysis as “pretty thin,” but acknowledged the complexity of the issue. “These are genuinely tough issues and new ones that courts are going to have to deal with,” she said.
As more states push for regulations, such as California’s proposed LEAD Act that would ban AI chatbots like Character.AI for children, this case may help define whether AI systems are tools, products, or something entirely new in the eyes of the law.