
The firm said it was making the changes after "reports and feedback from regulators, safety experts, and parents" highlighted concerns about its chatbots' interactions with teens.

Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.

"Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News.

He said AI safety was "a moving target", but something the company had taken an "aggressive" approach to, with parental controls and guardrails.

Online safety group Internet Matters welcomed the announcement, but said safety measures should have been built in from the start.

"Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said.

It called on "platforms, parents and policy makers" to make sure children's experiences of using AI are safe.

Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.

Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life at the age of 14 after viewing suicide material online, were discovered on the site in 2024 before being taken down.

Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on paedophile Jeffrey Epstein that had logged more than 3,000 chats with users. The outlet reported the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ and subsequently taken down by Character.ai.