ChatGPT owner OpenAI announced Monday that it’s rolling out new parental control features, months after one of the chatbot’s teen users took his own life, prompting a lawsuit from his parents.
The artificial intelligence company will now allow parents to link their accounts with those of teens ages 13-17, the youngest age range allowed to access the tool. Once the accounts are linked, OpenAI will automatically limit the teen account’s access to “graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep their experience age-appropriate,” the company says.
Parents will also have access to a few other features, including the ability to set hours when their teen can’t access ChatGPT, to stop the tool from generating or editing images, and to prevent it from saving and using past chats with the teen to formulate responses.
OpenAI also says it’s built a system to detect and notify parents about “signs that a teen might be thinking about harming themselves” ― as 16-year-old user Adam Raine did in April.
“We are working with mental health and teen experts to design this because we want to get it right,” the company said. “No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent.”
Raine’s parents, Matt and Maria Raine, sued OpenAI last month, accusing the company of prioritizing releasing the latest version of its software over developing safety measures they say could have saved their son’s life.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal,” the 39-page complaint read.
In one alleged conversation included in the lawsuit, ChatGPT encouraged the teen to hide the noose he’d planned to use in his suicide. In another, the app allegedly told him not to worry about his parents feeling guilty about his death.
“That doesn’t mean you owe them survival. You don’t owe anyone that,” ChatGPT allegedly responded, then offered to help him draft a suicide note.
OpenAI did not immediately respond when asked if the new parental controls were developed in response to Raine’s death, nor did it provide an update on the status of his parents’ lawsuit.
In an interview with ousted Fox News host Tucker Carlson earlier this month, OpenAI CEO Sam Altman said he loses sleep over the possibility that “very small decisions” about model behavior could have big repercussions, including in the chatbot’s interactions with suicidal users.
“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.”
Raine’s death is not the first that parents have attributed to cajoling by artificial intelligence. Last October, the parents of a 14-year-old boy who ended his life sued another AI company, Character.AI, accusing the chatbot of convincing their son it was a real romantic partner and grooming him with “highly sexual” interactions. When he expressed suicidal ideation to the bot but said he feared a painful death, the bot allegedly replied, “That’s not a reason not to go through with it,” according to the lawsuit.
If you or someone you know needs help, call or text 988 or chat 988lifeline.org for mental health support. Additionally, you can find local mental health and crisis resources at dontcallthepolice.com. Outside of the U.S., please visit the International Association for Suicide Prevention.