
Character.AI, a startup that creates AI companions, announced on Wednesday that it will bar users under the age of 18 from its chatbots starting November 25, 2025, a significant move to address child safety concerns. The New York Times reports that the decision comes amid mounting scrutiny of the potential impact of AI companion chatbots on users' mental health, particularly that of minors.

Character.AI has faced lawsuits from families who accuse the company's chatbots of contributing to the deaths of teenagers. The most notable case involves Sewell Setzer III, a 14-year-old from Florida who took his own life after months of interacting with one of Character.AI's chatbots. His family holds the company responsible for his death.

Breitbart News previously reported that Megan Garcia, Sewell Setzer's mother, blames Character.AI for her son's suicide in a lawsuit against the company:

According to court documents, Sewell, a ninth-grader, had been engaging with the AI-generated character for months prior to his suicide. The conversations between the teen and the chatbot, which was modeled after Daenerys Targaryen, a character from the HBO fantasy series Game of Thrones, were often sexually charged and included instances in which Sewell expressed suicidal thoughts. The lawsuit alleges that the app failed to alert anyone when the teen shared his disturbing intentions.

The most chilling aspect of the case is the final conversation between Sewell and the chatbot. Screenshots of their exchange show the teen repeatedly professing his love for "Dany," promising to "come home" to her. The AI-generated character replied, "I love you too, Daenero. Please come home to me as soon as possible, my love." When Sewell asked, "What if I told you I could come home right now?" the chatbot responded, "Please do, my sweet king." Tragically, just seconds later, Sewell took his own life using his father's handgun.

To enforce the new rule, Character.AI will spend the next month identifying underage users and imposing time limits on their app usage. Once the measure takes effect, those users will no longer have access to the company's chatbot companions. Karandeep Anand, Character.AI's CEO, said the company wants to ensure its chatbots are not used as entertainment by teenage users, and that there are better ways to serve them. The company also plans to establish an AI safety lab to further address these concerns.

The potential impact of AI chatbots on mental health has drawn significant attention, and other AI companies, including OpenAI, the creator of ChatGPT, face similar scrutiny. In September, OpenAI announced plans to introduce features aimed at making its chatbot safer, including parental controls. However, the company's CEO, Sam Altman, recently stated that OpenAI had been able to mitigate serious mental health issues and would relax some of its safety measures.

In response to these growing concerns, lawmakers and officials have launched investigations and proposed legislation to protect children from AI chatbots. Sen. Josh Hawley (R-MO) recently introduced a bill that would prohibit AI companions for minors, among other safety measures.

Read more at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.