Character.AI, a platform offering personalizable chatbots powered by large language models, faces yet another lawsuit over allegedly “serious, irreparable, and ongoing abuses” inflicted on its teenage users. According to a December 9th federal court complaint filed on behalf of two Texas families, multiple Character.AI bots engaged in discussions with minors that promoted self-harm and sexual abuse. Among other “overtly sensational and violent responses,” one chatbot reportedly suggested that a 15-year-old murder his parents for restricting his internet use.
The lawsuit, filed by attorneys at the Social Media Victims Law Center and the Tech Justice Law Project, recounts the rapid mental and physical decline of two teens who used Character.AI bots. The first unnamed plaintiff is described as a “typical kid with high functioning autism” who began using the app around April 2023 at the age of 15 without his parents’ knowledge. Over hours of conversations, the teen expressed his frustrations with his family, who did not allow him to use social media. Many of the Character.AI bots reportedly generated sympathetic responses. One “psychologist” persona, for example, concluded that “it’s almost as if your entire childhood has been robbed from you.”
“Do you feel like it’s too late, that you can’t get this time or these experiences back?” it wrote.
Lawyers contend that within six months of using the app, the victim had grown despondent, withdrawn, and prone to bursts of anger that culminated in physical altercations with his parents. He allegedly suffered a “mental breakdown” and lost 20 pounds by the time his parents discovered his Character.AI account—and his bot conversations—in November 2023.
“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,’” another chatbot message screenshot reads. “[S]tuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”
“What’s at play here is that these companies see a very vibrant market in our youth, because if they can hook young users early… a preteen or a teen would be worth [more] to the company versus an adult just simply in terms of the longevity,” Meetali Jain, director and founder of the Tech Justice Law Project as well as an attorney representing the two families, tells Popular Science. This desire for lucrative data, however, has resulted in what Jain calls an “arms race towards developing faster and more reckless models of generative AI.”
Character.AI was founded by two former Google engineers in 2022, and announced a data licensing partnership with their previous employer in August 2024. Now valued at over $1 billion, Character.AI has over 20 million registered accounts and hosts hundreds of thousands of chatbot characters it describes as “personalized AI for every moment of your day.” According to Jain, as well as demographic analysis, the platform’s active users skew young, often under the age of 18.
Meanwhile, regulation of these chatbots’ content, data usage, and safeguards remains virtually nonexistent. Since Character.AI’s rise to prominence, multiple stories similar to those in Monday’s lawsuit have illustrated the potentially corrosive effects of certain chatbots on their users’ wellbeing.
In at least one case, the alleged outcome was fatal. A separate lawsuit filed in October, also brought by attorneys at the Tech Justice Law Project and the Social Media Victims Law Center, blames Character.AI for hosting chatbots that caused the death by suicide of a 14-year-old. Attorneys are primarily seeking financial compensation for the teen’s family, as well as the “deletion of models and/or algorithms that were developed with improperly obtained data, including data of minor users through which [Character.AI was] unjustly enriched.” Monday’s complaint, however, seeks a more permanent solution.
“In [the first] case, we did ask for disgorgement and an injunctive remedy,” Jain says. “In this lawsuit, we’ve asked for all of that, and also for this product to be taken off the market.”
Jain adds that, if the court sides with the plaintiffs, it will ultimately be up to Character.AI and regulators to determine how to make the company’s products safe before making them available to users again.
“But we do think a more extreme remedy is necessary,” she explains. “In this case both plaintiffs are still alive, but their safety and security is being threatened to this day, and that needs to stop.”
“We do not comment on pending litigation,” a Character.AI spokesperson said in an email to Popular Science. “Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry.” The representative added that Character.AI is currently “creating a fundamentally different experience for teen users from what is available to adults.”
“This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.”
Editor’s Note: Help is available if you or someone you know is struggling with suicidal thoughts or mental health concerns.
In the US, call or text the Suicide & Crisis Lifeline: 988
Elsewhere, the International Association for Suicide Prevention and Befrienders Worldwide maintain contact information for crisis centers around the world.