“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” CEO Sam Altman explained in a blog post on Tuesday.
ChatGPT will direct users under 18 to the experience specifically created for kids. If a person’s age is unclear, the technology will default to the kids’ experience. OpenAI says it’s also developing “a technology to better predict a user’s age.” “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” the blog explained.
The ChatGPT experience for users under 18 was designed with new parental controls, such as “blockout hours” during which kids can’t talk to ChatGPT. It blocks sexual content, won’t flirt, and won’t engage in discussions about self-harm. Altman said OpenAI will flag such messages and contact a user’s guardian if suicidal thoughts are mentioned. If the guardian can’t be reached, OpenAI will contact the authorities “in case of imminent harm,” the post noted.
The new kid-friendly experience comes less than a week after the Federal Trade Commission (FTC) announced an investigation into how AI companies, including OpenAI, impact the well-being of children. “AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the FTC said.
At the time, OpenAI said that making the technology “safe for everyone” is its top concern. “We recognize the FTC has open questions and concerns, and we’re committed to engaging constructively and responding to them directly,” an OpenAI spokesperson said.
According to the announcement, the ChatGPT experience for users under 18 will be available by the end of the month.