AI-powered chatbot platform Character AI is introducing “stringent” new safety features following a lawsuit filed by the mother of a teen user who died by suicide in February.
The measures will include “improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines,” as well as a time-spent notification, a company spokesperson told Decrypt, noting that the company could not comment on pending litigation.
Character AI did, however, express sympathy over the user's death and outlined its safety protocols in a blog post Wednesday.
“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” Character.ai tweeted. “As a company, we take the safety of our users very seriously.”
In the months before his death, 14-year-old Florida resident Sewell Setzer III had grown increasingly attached to a user-generated chatbot named after Game of Thrones character Daenerys Targaryen, according to the New York Times. He often interacted with the bot dozens of times per day and sometimes exchanged romantic and sexual content.
Setzer communicated with the bot in the moments leading up to his death and had previously shared thoughts of suicide, the Times reported.
Setzer’s mother, lawyer Megan L. Garcia, filed a lawsuit Tuesday seeking to hold Character AI and its founders, Noam Shazeer and Daniel De Freitas, responsible for her son’s death. Among other claims, the suit alleges that the defendants “chose to support, create, launch, and target at minors a technology they knew to be dangerous and unsafe,” according to the complaint. Garcia is seeking an unspecified amount of damages.
Google LLC and Alphabet Inc. are also named defendants in the suit. Google rehired Shazeer and De Freitas, both of whom left the tech giant in 2021 to found Character AI, in August as part of a $2.7 billion deal that also included licensing the chatbot startup’s large language model.
Character AI has also “implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” the company’s statement said. It will also alter its models “to reduce the likelihood of encountering sensitive or suggestive content” for users under 18 years old.
Character AI is one of many AI companionship apps on the market, which often have less stringent safety guidelines than conventional chatbots like ChatGPT. Character AI allows users to customize their companions and direct their behavior.
The lawsuit, which comes amid growing concerns among parents about the psychological impact of technology on children and teenagers, claims that Setzer’s attachment to the bot had a negative effect on his mental health. Setzer was diagnosed with mild Asperger’s syndrome as a child and had more recently been diagnosed with anxiety and disruptive mood dysregulation disorder, the Times reported.
The suit is one of several moving through the courts that are testing legal protections provided to social media companies under Section 230 of the Communications Decency Act, which shields them from liability associated with user-generated content. TikTok is petitioning to rehear a case in which a judge ruled that it could be held liable after a 10-year-old girl died while trying to complete a “blackout challenge” that she saw on the app. It's the latest problem for Character AI, which came under fire last month for hosting a chatbot named after a murder victim.