Character.AI is banning minors from AI character chats

2025-10-29

Copyright The Verge

Character.AI is gradually shutting down chats for people under 18 and rolling out new ways to figure out if users are adults. The company announced Wednesday that under-18 users will be immediately limited to two hours of “open-ended chats” with its AI characters, and that limit will shrink to a complete ban from chats by November 25th. In the same announcement, the company says it’s rolling out a new in-house “age assurance model” that classifies a user’s age based on the type of characters they choose to chat with, in combination with other on-site or third-party data.

Both new and existing users will be run through the age model, and users flagged as under-18 will automatically be directed to the company’s teen-safe version of its chat, which it rolled out last year, until the November cutoff. Adults mistaken for minors can prove their age to the third-party verification site Persona, which will handle the sensitive data necessary to do so, such as showing a government ID.

After the ban, teens will still be allowed on the site to revisit old chats and use non-chat features, such as creating characters and making videos, stories, and streams featuring characters. Character.AI CEO Karandeep Anand acknowledged to The Verge that users spend a “much smaller percentage” of time on these features than they do with the company’s flagship chatbot conversations, however — which is why it’s a “very, very bold move” for the company to limit chatbots, he said.

Anand told The Verge in an interview that “sub-10 percent” of the company’s userbase self-reports as being under the age of 18. He added that the company does not have a way to find out the “real numbers” until it starts using the new age detection model. The number of minors has declined over time, he said, as Character.AI has rolled out restrictions for underage users.
“When we started making the changes of under 18 experiences earlier in the year, our under 18 user base did shrink, because those users went into other platforms, which are not as safe,” Anand said.

Character.AI has been sued over allegations of wrongful death, negligence, and deceptive trade practices by parents who say their children were drawn into inappropriate or harmful relationships with chatbots. The lawsuits target the company and its founders, Noam Shazeer and Daniel De Freitas, along with Google, the founders’ former workplace. Character.AI has repeatedly modified its services in the wake of the suits, including by directing users to the National Suicide Prevention Lifeline when certain phrases related to self-harm or suicide are used in the chat.

Lawmakers are attempting to curb the growing industry of AI companions. A California bill passed in October requires developers to make clear to users that the chatbots are AI, not human, and a federal law proposed Tuesday would outright ban providing AI companions to minors.

In addition to the teen model, the company has previously launched features like a voluntary ‘Parental Insights’ feature, which sends parents a summary of a user’s activity, though not a complete log of their chats. But these features rely on a user’s self-reported age, which is easily faked. Other AI companies have recently imposed restrictions on teen users, like Meta, which changed its policies after Reuters reported on internal rules allowing AI chatbots to talk to minors in sensual ways.

The company appears to anticipate that the move will disappoint its teen userbase: Character.AI says in the company statement that it is “deeply sorry” for eliminating “a key feature of our product” that most teens use “within the bounds of our content rules.” Of course, it’s still theoretically possible for an underage user to get past these new age assurance measures, Anand told The Verge.
“In general, is there a case where somebody can always circumvent all possible age checks, including authentication? The answer is always yes.” The goal, he said, is better, not 100 percent, age verification accuracy. Character.AI already had some age-related protections in place, such as not allowing a user to change their age after sign-up or create a new account with a different age.

While general-purpose chatbots like ChatGPT and Gemini are heavily courting young users, “companion chatbot” services built to help users build relationships with virtual characters are often 18-plus. But Character.AI didn’t launch with an adults-only age limit, and its specific focus on fandom made it popular among teens.

Now, Character.AI is also founding and bankrolling, at least initially, an independent nonprofit called the AI Safety Lab. The organization will focus on issues related to the AI entertainment industry, which, Anand said, encounters different problems than other AI sectors. The nonprofit will be staffed with Character.AI employees initially, but Anand said the goal is for “this to be an industry partnership, not a Character entity.” He said details about founding partners and members external to the company will be made available in the coming weeks or months.
