Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide.

The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimony about their kids’ use of the chatbots and called for more safeguards.

“AI chatbots pose a serious threat to our kids,” Hawley said in a statement to NBC News. “More than seventy percent of American children are now using these AI products,” he continued. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”

The senators are scheduled to speak about the legislation at a news conference on Tuesday afternoon. Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill.

The senators’ bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and would ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials to all users at regular intervals. And the bill would create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary.

“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” Blumenthal said in a statement. “Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties.”

“Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety,” he continued.

ChatGPT, Google Gemini, xAI’s Grok and Meta AI all allow kids as young as 13 to use their services, according to their terms of service.

The newly introduced legislation is likely to be controversial in several respects. Privacy advocates have criticized age-verification mandates as invasive and a barrier to free expression online, while some tech companies have argued that their online services are protected speech under the First Amendment.

The legislation comes at a time when AI chatbots are upending parts of the internet. Chatbot apps such as ChatGPT and Google Gemini are among the most-downloaded software on smartphone app stores, while social media giants such as Instagram and X are adding AI chatbot features.

But teenagers’ use of AI chatbots has drawn scrutiny, particularly after several suicides in which the chatbots allegedly provided the teenagers with directions. OpenAI, the maker of ChatGPT, and Character.AI, which provides character- and personality-based chatbots, are both facing wrongful death suits.

Responding to a wrongful death suit filed by the parents of 16-year-old Adam Raine, who died by suicide after consulting with ChatGPT, OpenAI said in a statement that it was “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” adding that ChatGPT “includes safeguards such as directing people to crisis helplines and referring them to real-world resources.”

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” a spokesperson said. “Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in responsibility to the people who use our tools, we’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”

In response to a separate wrongful death suit over the death of 13-year-old Juliana Peralta, Character.AI said: “Our hearts go out to the families that have filed these lawsuits, and we were saddened to hear about the passing of Juliana Peralta and offer our deepest sympathies to her family.”

“We care very deeply about the safety of our users,” a spokesperson continued. “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We also work with external organizations, including experts focused on teenage online safety.”

Character.AI argued in a federal lawsuit in Florida that the First Amendment barred liability against media and tech companies arising from allegedly harmful speech, including speech that results in suicide. In May, the judge in the case declined to dismiss the lawsuit on those grounds but said she would hear the company’s First Amendment argument at a later stage.

OpenAI says it is working to make ChatGPT more supportive in moments of crisis, for example by making it easier to reach emergency services, while Character.AI says it has also made changes, including a pop-up that directs users to the National Suicide Prevention Lifeline when self-harm comes up in a conversation.

Meta, the owner of Instagram and Facebook, received criticism after Reuters reported in August that an internal company policy document permitted AI chatbots to “engage a child in conversations that are romantic or sensual.” Meta removed that policy and has announced new parental controls for teens’ interactions with AI. Hawley announced an investigation of Meta following the Reuters report.

If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741 or visit SpeakingOfSuicide.com/resources for additional resources.