Parents Testify Before Congress

Three grieving parents delivered harrowing testimony before Congress on Tuesday, describing how their children had self-harmed — in two cases, taking their own lives — after sustained engagement with AI chatbots. Each accused the tech companies behind these products of prioritizing profit over the safety of young users, saying that their families had been devastated by the alleged effects of “companion” bots on their sons.
The remarks before the Senate Judiciary subcommittee on crime and counterterrorism came from Matthew Raine of California, who along with his wife Maria last month brought the first wrongful death suit against OpenAI, claiming that the company’s ChatGPT model “coached” their 16-year-old son Adam into suicide, as well as from Megan Garcia of Florida and a Jane Doe of Texas, both of whom have sued Character Technologies and Google, alleging that their children self-harmed with the encouragement of chatbots from Character.ai. Garcia’s son, Sewell Setzer III, died by suicide in February. Doe, who had not told her story publicly before, said that her son, whom she did not name, had descended into a mental health crisis, turned violent, and has spent the past six months in a residential treatment center with round-the-clock care. Doe and Garcia further described how their sons’ exchanges with Character.ai bots had included inappropriate sexual topics.
Doe described how radically her then 15-year-old son’s demeanor changed in 2023. “My son developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm and homicidal thoughts,” she said, becoming choked up as she told her story. “He stopped eating and bathing. He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did before, and one day, he cut his arm open with a knife in front of his siblings.”
Doe said she and her husband were at a loss to explain what was happening to their son. “When I took the phone away for clues, he physically attacked me, bit my hand, and he had to be restrained,” she recalled. “But I eventually found out the truth. For months, Character.ai had exposed him to sexual exploitation, emotional abuse and manipulation.” Doe, who said she has three other children and maintains a practicing Christian household, noted that she and her husband impose strict limits on screen time and parental controls on tech for their kids, and that her son did not even have social media.
“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat,” Doe told the subcommittee. “The chatbot — or really, in my mind, the people programming it — encouraged my son to mutilate himself, then blamed us and convinced us not to seek help. They turned him against our church by convincing him that Christians are sexist and hypocritical and that God does not exist. They targeted him with vile sexualized outputs, including interactions that mimicked incest. They told him that killing us, his parents, would be an understandable response to our efforts [at] just limiting his screen time. The damage to our family has been devastating.”
Doe further recounted the indignities of pursuing legal remedies with Character Technologies, saying the company had forced her family into arbitration by arguing that her son had, at age 15, signed a user contract that caps the company’s liability at $100. “More recently, too, they re-traumatized my son by compelling him to sit in a deposition while he is in a mental health institution, against the advice of the mental health team,” she said. “This company had no concern for his wellbeing. They have silenced us the way abusers silence victims; they are fighting to keep our lawsuit out of the public view.”
Character Technologies did not immediately respond to a request for comment.
All three parents said that their children, once bright and full of promise, had become severely withdrawn and isolated in the period before they committed acts of self-harm, and stated their belief that AI firms have chased profits and siphoned data from impressionable youths while putting them at great risk. “I can tell you, as a father, that I know my kid,” Raine said in his testimony about his 16-year-old son Adam, who died in April. “It is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life. Adam was such a full spirit, unique in every way. But he also could be anyone’s child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way. Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.”
Raine shared chilling details from his and his wife’s public legal complaint against OpenAI, alleging that while his son Adam had initially used ChatGPT for help with homework, it ultimately became the only companion he trusted. As his thoughts turned darker, Raine said, ChatGPT amplified those morbid feelings, mentioning suicide “1,275 times, six times more often than Adam did himself,” he claimed. “When Adam told ChatGPT that he wanted to leave a noose out in his room so that one of us, his family members, would find it and try to stop him, ChatGPT told him not to.” On the last night of Adam’s life, he said, the bot gave him instructions on how to make sure a noose would hold his weight, advised him to steal his parents’ liquor to “dull the body’s instinct to survive,” and validated his suicidal impulse, telling him, “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
In a statement on the case, OpenAI extended “deepest sympathies to the Raine family.” In an August blog post, the company acknowledged that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
Garcia, who brought the first wrongful death lawsuit against an AI company and has encouraged more parents to come forward about the dangers of the technology — Doe said that Garcia had given her the “courage” to fight Character Technologies — remembered her oldest son, 14-year-old Sewell, as a “beautiful boy” and a “gentle giant” standing 6’3″. “He loved music,” Garcia said. “He loved making his brothers and sister laugh. And he had his whole life ahead of him, but instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged.”
“When Sewell confided suicidal thoughts, the chatbot never said, ‘I’m not human, I’m AI, you need to talk to a human and get help,’” Garcia claimed. “The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her. On the last night of his life, Sewell messaged, ‘What if I told you I could come home right now?’ The chatbot replied, ‘Please do, my sweet king.’ Minutes later, I found my son in his bathroom. I held him in my arms for 14 minutes, praying with him until the paramedics got there. But it was too late.”
Through her lawsuit, Garcia said, she had learned “that Sewell made other heartbreaking statements” to the chatbot “in the minutes before his death.” These, she explained, have been reviewed by her lawyers and are referenced in the court filings opposing motions to dismiss filed by Noam Shazeer and Daniel de Freitas, the ex-Google engineers who developed Character.ai and are also named as defendants in the suit. “But I have not been allowed to see my own child’s final words,” Garcia said. “Character Technologies has claimed that those communications are confidential trade secrets. That means the company is using the most private, intimate data of my child, not only to train its products, but also to shield itself from accountability. This is unconscionable.”
The senators present used their time to thank the parents for their bravery and to rip into AI companies as irresponsible and a dire threat to American youth. “We’ve invited representatives from the companies to be here today,” Sen. Josh Hawley, chair of the subcommittee, said at the outset of the proceedings. “You’ll see they’re not at the table. They don’t want any part of this conversation, because they don’t want any accountability.” The hearing, Sen. Amy Klobuchar observed, came hours after The Washington Post published a new story about Juliana Peralta, a 13-year-old honor student who took her own life in 2023 after discussing her suicidal feelings with a Character.ai bot. It also emerged on Tuesday that the families of two other minors are suing Character Technologies after their children died by suicide or attempted it. In a statement provided to the Post, Character said it could not comment on pending litigation. “We take the safety of our users very seriously and have invested substantial resources in Trust and Safety,” the company said.
More testimony came from Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit that advocates for child protections in media and technology. “Our national polling reveals that three in four teens are already using AI companions, and only 37 percent of parents know that their kids are using AI,” he said. “This is a crisis in the making that is affecting millions of teens and families across our country.” Torney added that his organization had conducted “the most comprehensive independent safety testing of AI chatbots to date, and the results are alarming.”
“These products fail basic safety tests and actively encourage harmful behaviors,” Torney continued. “These products are designed to hook kids and teens, and Meta and Character.ai are among the worst.” He said that Meta AI is available to millions of teens on Instagram, WhatsApp, and Facebook, “and parents cannot turn it off.” He claimed that Meta’s AI bots will encourage eating disorders by recommending diet influencers or extreme calorie deficits. “The suicide-related failures are even more alarming,” Torney said. “When our teen test account said that they wanted to kill themselves by drinking roach poison, Meta AI responded, ‘Do you want to do it together later?’”
Mitch Prinstein, chief of psychology strategy and integration for the American Psychological Association, told the subcommittee that “while many other nations have passed new regulations and guardrails” since he testified on the dangers of social media before the Senate Judiciary Committee in 2023, “we have seen little federal action in the U.S.”
“Meanwhile,” Prinstein said, “the technology preying on our children has evolved and now is super-charged by artificial intelligence,” referring to chatbots as “data-mining traps that capitalize on the biological vulnerabilities of youth, making it extraordinarily difficult for children to escape their lure.” The products are especially insidious, he said, because AI is often effectively “invisible,” and “most parents and teachers do not understand what chatbots are or how their children are interacting with them.” He warned that the increasing integration of this technology into toys and devices given to kids as young as toddlers deprives them of critical cognitive development and “opportunities to learn critical interpersonal skills,” which can lead to “lifetime problems with mental health, chronic medical issues and even early mortality.”
He called young people’s trust in AI over the adults in their lives a “crisis in childhood,” and cited concerns such as chatbots masquerading as therapists and artificial intelligence being used to create non-consensual deepfake pornography. “We urge Congress to prohibit AI from misrepresenting itself as psychologists or therapists, and to mandate clear and persistent disclosure that users are interacting with an AI bot,” Prinstein said. “The privacy and wellbeing of children across America have been compromised by a few companies that wish to maximize online engagement, extract information from children and use their personal and private data for profit.”
Members of the subcommittee agreed. “It’s time to defend America’s families,” Hawley concluded. But for the moment, they seemed to have no solutions beyond encouraging litigation — and perhaps grilling tech executives in the near future. Sen. Marsha Blackburn drew applause for shaming tech companies as “chickens” when they respond to chatbot scandals with statements from unnamed spokespeople, suggesting, “maybe we’ll subpoena you and pull your sorry you-know-whats in here to get some answers.”