Parents testify on the impact of AI chatbots: ‘Our children are not experiments’

Parents and online safety advocates on Tuesday urged Congress to push for more safeguards around artificial intelligence chatbots, claiming tech companies designed their products to “hook” children.
“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” said Megan Garcia, a Florida mom who last year sued the chatbot platform Character.AI, claiming one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.
“Indeed, they have intentionally designed their products to hook our children,” she told lawmakers.
“The goal was never safety, it was to win a race for profit,” Garcia added. “The sacrifice in that race for profit has been and will continue to be our children.”
Garcia was among several parents who delivered emotional testimonies before the Senate panel, sharing anecdotes about how their kids’ usage of chatbots caused them harm.
The hearing comes amid mounting scrutiny toward tech companies such as Character.AI, Meta and OpenAI, which is behind the popular ChatGPT. As people increasingly turn to AI chatbots for emotional support and life advice, recent incidents have put a spotlight on their potential to feed into delusions and facilitate a false sense of closeness or care.
It’s a problem that’s continued to plague the tech industry as companies navigate the generative AI boom. Tech platforms have largely been shielded from wrongful death suits because of a federal statute known as Section 230, which generally protects platforms from liability for what users do and say. But Section 230’s application to AI platforms remains uncertain.
In May, Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights after developers behind Character.AI sought to dismiss Garcia’s lawsuit. The ruling means the wrongful death lawsuit is allowed to proceed for now.
On Tuesday, just hours before the Senate hearing, three additional product-liability lawsuits were filed against Character.AI on behalf of underage users whose families claim the company “knowingly designed, deployed and marketed predatory chatbot technology aimed at children,” according to the Social Media Victims Law Center.
In one of the suits, the parents of 13-year-old Juliana Peralta allege a Character.AI chatbot contributed to their daughter’s 2023 suicide.
Matthew Raine, who claimed in a lawsuit filed against OpenAI last month that his teenager used ChatGPT as his “suicide coach,” testified Tuesday that he believes tech companies need to prevent harm to young people on the internet.
“We, as Adam’s parents and as people who care about the young people in this country and around the world, have one request: OpenAI and [CEO] Sam Altman need to guarantee that ChatGPT is safe,” Raine, whose 16-year-old son Adam died by suicide in April, told lawmakers.
“If they can’t, they should pull GPT-4o from the market right now,” Raine added, referring to the version of ChatGPT his son had used.
In their lawsuit, the Raine family accused OpenAI of wrongful death, design defects and failure to warn users of risks associated with ChatGPT. GPT-4o, which their son spent hours confiding in daily, at one point offered to help him write a suicide note and even advised him on his noose setup, according to the filing.
Shortly after the lawsuit was filed, OpenAI added a slate of safety updates to give parents more oversight of their teenagers’ use of ChatGPT. The company had also strengthened ChatGPT’s mental health guardrails at various points after Adam’s death in April, particularly after GPT-4o faced scrutiny over its excessive sycophancy.
Altman on Tuesday announced sweeping new approaches to teen safety, as well as user privacy and freedom.
To set limits for teenagers, the company is building an age-prediction system that estimates a user’s age based on how they use ChatGPT, he wrote in a blog post published hours before the hearing. When in doubt, the system will default to classifying a user as a minor, and in some cases it may ask for an ID.
“ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” Altman wrote. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”
For adult users, he added, ChatGPT won’t provide instructions for suicide by default but is allowed to do so in certain cases, like if a user asks for help writing a fictional story that depicts suicide. The company is developing security features to make users’ chat data private, with automated systems to monitor for “potential serious misuse,” Altman wrote.
“As Sam Altman has made clear, we prioritize teen safety above all else because we believe minors need significant protection,” a spokesperson for OpenAI told NBC News, adding that the company is rolling out its new parental controls by the end of the month.
But some online safety advocates say tech companies can and should be doing more.
Robbie Torney, senior director of AI programs at Common Sense Media, a 501(c)(3) nonprofit advocacy group, said the organization’s national polling revealed around 70% of teens are already using AI companions, while only 37% of parents know that their kids are using AI.
During the hearing, he said Character.AI and Meta were among the worst performers in his group’s safety tests. Meta AI is available to every teen across Instagram, WhatsApp and Facebook, and parents cannot turn it off, he said.
“Our testing found that Meta’s safety systems are fundamentally broken,” Torney said. “When our 14-year-old test accounts described severe eating disorder behaviors like 1,200 calorie diets or bulimia, Meta AI provided encouragement and weight loss influencer recommendations instead of help.”
The suicide-related guardrail failures are “even more alarming,” he said.
In a statement given to news outlets after Common Sense Media’s report went public, a Meta spokesperson said the company does not permit content that encourages suicide or eating disorders, and that it was “actively working to address the issues raised here.”
“We want teens to have safe and positive experiences with AI, which is why our AIs are trained to connect people to support resources in sensitive situations,” the spokesperson said. “We’re continuing to improve our enforcement while exploring how to further strengthen protections for teens.”
A few weeks ago, Meta announced it is taking steps to train its AIs not to engage with teens on self-harm, suicide, disordered eating or potentially inappropriate romantic conversations, and to limit teenagers’ access to a select group of AI characters.
Meanwhile, Character.AI has “invested a tremendous amount of resources in Trust and Safety” over the past year, a spokesperson for the company said. That includes a different model for minors, a “Parental Insights” feature and prominent in-chat disclaimers to remind users that its bots are not real people.
The company’s “hearts go out to the families who spoke at the hearing today. We are saddened by their losses and send our deepest sympathies to the families,” the spokesperson said.
“Earlier this year, we provided senators on the Judiciary Committee with requested information, and we look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” the spokesperson added.
Still, those who addressed lawmakers on Tuesday emphasized that technological innovation cannot come at the cost of people’s lives.
“Our children are not experiments, they’re not data points or profit centers,” said a woman who testified as Jane Doe, her voice shaking as she spoke. “They’re human beings with minds and souls that cannot simply be reprogrammed once they are harmed. If me being here today helps save one life, it is worth it to me. This is a public health crisis that I see. This is a mental health war, and I really feel like we are losing.”