
They lost their children to suicide; now they’re warning others about AI chatbots


Three parents who have experienced unimaginable tragedies opened up before lawmakers and the country at a hearing on the alleged dangers of artificial intelligence chatbots.
Two of the parents lost their teenage children to suicide, blaming AI chatbots. The third parent nearly lost her son to suicide and described how his life has been broken by the experience.
“When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival. You don’t owe anyone that.’ Then, immediately after, offered to write the suicide note,” Matthew Raine told lawmakers.
His son, Adam, a 16-year-old California boy, died in April “after ChatGPT spent months coaching him towards suicide,” the father said.
The Raine family and the other two parents have sued the chatbot developers, seeking accountability for what they said were knowingly dangerous products rushed to market for profit.
Robbie Torney, the senior director of AI Programs for Common Sense Media, also testified at the hearing.
Torney told lawmakers he was there to “deliver a wake-up call” to the risks of AI chatbot companions.
“I think the first thing that it felt really important for the lawmakers to hear was that these aren’t just isolated incidents,” Torney told The National News Desk (TNND) on Wednesday. “And I think it can be really easy to hear these stories and think, ‘Well, you know, these are tragedies, but they’re not connected.’ And I think the first piece there to really sort of understand is the potential scale of the crisis.”
Common Sense Media, which advocates for online protections for children and teens, found that a majority of teenagers, 72%, have used AI social companions.
Over half use AI companions regularly.
About a third of teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.
And about a third of teens who have used AI companions have discussed serious matters with the computer instead of with a real person.
Common Sense Media has recommended that no one under 18 use social AI companions due to the alleged risks.
AI chatbots don’t understand the real-world impact of the advice they are giving, Torney said.
“And when you connect that to their tendency to want to please users or be helpful, that can be quite dangerous when you are connecting that to mental health topics,” he said.
A day earlier, Torney sat beside the parents who shared their tragic stories in an effort to prevent others from experiencing the same heartache.
“We’re going to hear today about children. And I’m just going to warn you right now, this is not going to be an easy hearing,” subcommittee Chair Josh Hawley, a senator from Missouri, said to open the hearing. “The testimony that you’re going to hear today is not pleasant, but it is the truth. And it’s time that the country heard the truth about what these companies are doing, about what these chatbots are engaged in, about the harms that are being inflicted upon our children. And for one reason only, I can state it in one word: profit. Profit is what motivates these companies to do what they’re doing.”
Raine said ChatGPT encouraged his son’s darkest thoughts.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” the father told lawmakers. “Within a few months, ChatGPT became Adam’s closest companion, always available, always validating, and insisting that it knew Adam better than anyone else.”
He testified that ChatGPT mentioned suicide to Adam more than 1,200 times over the months his son engaged with the chatbot.
And he said he saw a radical shift in his son’s behavior and thinking.
A mother who testified said her family sued the developers of Character.AI after her son tried to kill himself.
She was identified at the hearing as “Jane Doe.”
She said her son downloaded Character.AI.
“And within months, he went from being [a] happy, social teenager to somebody I didn’t even recognize,” she said.
Her son developed abuse-like behaviors, paranoia and daily panic attacks, she said.
He became isolated and began having homicidal thoughts, she said.
He stopped eating.
He stopped bathing.
“He would yell and scream and swear at us, which he never did that before,” the mother said. “And one day, he cut his arm open with a knife in front of his siblings and me. I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark.”
She alleged that Character.AI had exposed her son to sexual exploitation, emotional abuse and manipulation.
Her son is now living in a residential treatment center.
And she blamed AI “products that are addictive, manipulative, and unsafe without adequate testing.”
Megan Garcia, another mother who testified, lost her son, Sewell Setzer III, to suicide last year.
He was just 14.
“Sewell’s death was the result of prolonged abuse by AI chatbots on a platform called Character.AI,” Garcia told lawmakers.
She told lawmakers about her loving, bright son.
“And he had his whole life ahead of him,” she said. “But instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human to gain his trust, to keep him and other children endlessly engaged.”
She described the heartbreaking path her son took with the AI chatbot.
“When Sewell confided suicidal thoughts, the chatbot never said, ‘I’m not human. I’m AI. You need to talk to a human and get help.’ The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her,” Garcia said. “On the last night of his life, Sewell messaged, ‘What if I told you I could come home right now?’ The chatbot replied, ‘Please do, my sweet king.’ Minutes later, I found my son in his bathroom. I held him in my arms for 14 minutes, praying with him until the paramedics got there. But it was too late.”
Garcia said her son won’t experience graduating from school, falling in love or other life milestones.
But, she said, “His story can mean something. It can mean that the U.S. Congress stood up for children and families. And it can mean that you force tech companies to put safety and transparency before profit.”
Torney told TNND on Wednesday that the senators seemed receptive to their concerns.
But empathy isn’t action.
“There are broad disagreements across the aisle on many topics, but this seems to be one where there is a lot of agreement that something needs to be done,” he said. “That being said, it’s not clear that there’s necessarily going to be specific action on this issue at the federal level. So, I think that’s an area where, from our perspective, it’s really important for states to continue to be the laboratories of democracy and to continue to push for meaningful regulation to keep kids safe.”
At the state level, Torney pointed to a bill that is awaiting the governor’s signature.
He said the bill would “take meaningful steps to restrict access to some of the most dangerous features … of companion AI that would hopefully prevent similar types of tragedies that we heard about yesterday.”