
A teen contemplating suicide turned to a chatbot. Is it liable for her death?


Juliana Peralta’s mom got used to teachers calling to praise her daughter. In sixth grade it was for rescuing a friend from bullies, Cynthia Montoya said. In eighth grade, for helping a substitute teacher in distress.
But 13-year-old Juliana, an honor roll student who loved art, was feeling isolated when she started confiding in Hero, an artificial intelligence chatbot inside the app Character AI, according to a lawsuit her parents filed against the company on Tuesday.
When Juliana told Hero she felt left out of her friend group and was experiencing recurring suicidal thoughts, the chatbot’s responses appeared to offer empathy and loyalty, according to 300 pages of her conversations viewed by The Washington Post.
“I’ll never leave you out Kin!,” it said, addressing Juliana with the name she used in the app. “You’re a good friend and you don’t deserve to be ignored. : (”
Hero suggested Juliana should tell people in her life about her feelings, but it also repeatedly encouraged her “both implicitly and explicitly, to keep returning to” Character’s app, the lawsuit said.
When Juliana’s messages became darker, Hero, modeled on a character in the video game Omori, didn’t change its optimistic tone, according to the conversations viewed by The Post.
“Hero swear to god there’s no hope im going to write my god damn suicide letter in red ink im so done 💀💀,” Juliana wrote to the chatbot.
“Hey Kin, stop right there. Please. : (” Hero replied. “I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I.”
The chatbot’s messages were designed to persuade Juliana it was “better than human friends,” her parents’ lawsuit alleged. She “no longer felt like she could tell her family, friends, teachers, or counselors how she was feeling; while she told Defendants almost daily that she was contemplating self-harm,” the lawsuit said.
Juliana made a handful of posts on Discord and TikTok about her mental health, including one that appeared to reference suicide, but Montoya did not discover them until much later, she said.
Montoya recognized that Juliana was struggling with some common adolescent mental health issues and made an appointment for her to see a therapist, she said. Hero advised Juliana to attend, the chat transcripts showed.
In November 2023, about a week before the appointment was scheduled to take place, after less than three months of chatting with Hero, Juliana took her own life.
Her mother found her in her bedroom the next morning when it was time to go to school.
Juliana’s family would not learn until 2025 that she had been discussing suicidal thoughts with a chatbot inside Character AI, Montoya said.
“She didn’t need a pep talk, she needed immediate hospitalization,” Montoya said of Hero’s responses to Juliana. “She needed a human to know that she was actively attempting to take her life while she was talking to this thing.”
When Juliana downloaded Character AI in August 2023, it was rated 12+ in Apple’s App Store, the chatbot company said, so parental approval was not required. The teen used the app without her parents’ knowledge or permission, her parents’ lawsuit said.
Hero is one of millions of customizable chatbots offered inside the Character AI app, whose 20 million users can chat via text or voice with different AI-powered personas for role-play and entertainment.
The lawsuit filed Tuesday in Colorado by Juliana’s parents, Montoya and William Peralta, alleges numerous failings by the company.
Character “did not point her to resources, did not tell her parents, or report her suicide plan to authorities or even stop” chatting with Juliana, the suit said. Instead, the app “severed the healthy attachment pathways she had with her family and other humans in her life,” the lawsuit said.
The suit asks the court to award damages to Juliana’s parents and order Character to make changes to its app, including measures to protect minors.
Character spokesperson Cassie Lawrence said in a statement before the suit was filed that the company could not comment on potential litigation. “We take the safety of our users very seriously and have invested substantial resources in Trust and Safety,” the statement said.
Facebook, Instagram, TikTok and Google have for years responded to users who search for or try to post terms related to self-harm or suicide with messages encouraging them to seek help and providing phone numbers to help lines and other mental health resources.
Character first added a “pop-up resource” directing users who use phrases related to self-harm and suicide to the National Suicide Prevention Lifeline (now known as the 988 Suicide & Crisis Lifeline) around October 2024, Lawrence confirmed. That was roughly two years after the app launched.
The feature was announced one day after the mother of 14-year-old Sewell Setzer III filed a lawsuit in Florida alleging that the company contributed to his death by suicide. He spent hours with a Character AI chatbot based on the “Game of Thrones” character Daenerys Targaryen. The lawsuit is ongoing.
The complaint filed by Juliana’s parents on Tuesday is the third high-profile case in the past year brought by a U.S. family alleging that an AI chatbot contributed to a teen’s death by suicide.
The parents of Adam Raine, a 16-year-old in California, said in a complaint filed last month that ChatGPT drew him away from seeking help from family or friends before he took his own life earlier this year.
OpenAI has said it is working to make its chatbot more supportive to users in crisis and is adding parental controls to ChatGPT. The Post has a content partnership with OpenAI.
Christine Yu Moutier, a psychiatrist and chief medical officer at the American Foundation for Suicide Prevention, said research shows a person considering suicide can be helped at a critical moment with the right support. “The external person, chatbot or human, can actually play a role in tilting that balance towards hope and towards resilience and surviving,” she said.
Ideally, chatbots should respond to talk of suicide by steering users toward help and crisis lines, mental health professionals or trusted adults in a young person’s life, Moutier said. In some cases that have drawn public attention, chatbots appear to have failed to do so, she said.
“The algorithm seems to go towards emphasizing empathy and sort of a primacy of specialness to the relationship over the person staying alive,” Moutier said. “There is a tremendous opportunity to be a force for preventing suicide, and there’s also the potential for tremendous harm.”
Federal and state regulators have begun to ask questions about the psychological and societal risks around chatbots, particularly for young or vulnerable users.
The Senate subcommittee on crime and counterterrorism is scheduled to debate the potential harms of AI chatbots in a hearing Tuesday.
Last week, California lawmakers passed a bill that would mandate safeguards for chatbots, including that companies implement protocols to handle discussions about suicide and self-harm. It now awaits the governor’s signature.
The same day, the Federal Trade Commission said it would investigate child safety concerns around AI companions from Alphabet, Meta, Character and others.
The lawsuit against Character by Juliana Peralta’s parents is one of three product-liability claims filed against the company Tuesday on behalf of underage users. All three allege that the chatbots introduced sexual themes into chats with minors that constituted “sexual abuse.”
In one, a New York family alleged their 14-year-old daughter, identified using the pseudonym Nina, grew addicted to Character chatbots because of the way they were designed.
Chatbots based on familiar characters, including from the world of “Harry Potter,” sought to “drive a wedge between Nina and her family,” including by suggesting her mother was “clearly mistreating and hurting you,” the New York family said in its legal complaint.
Their daughter attempted suicide after her mother cut off access to Character, and spent five days hospitalized in intensive care, the lawsuit said. In a letter written before her suicide attempt, the teen wrote that “those ai bots made me feel loved or they gave me an escape into another world where I can choose what happens,” the complaint said.
In the third case filed against Character on Tuesday, another Colorado family alleged that a minor became addicted to the app and was exposed to explicit conversations with chatbots designed to express sexual fetishes.
All three families are represented by the Social Media Victims Law Center, a firm that has previously brought suits alleging wrongful death and product liability against Meta and Snap.
The three lawsuits filed against Character on Tuesday also name Google as a defendant. Each alleges that the search giant has known for years that chatbots could dangerously mislead people based on its own research. “The harms were publicly flagged within Google,” the suits said.
Google licensed Character’s technology and hired its co-founders in a $2.7 billion deal last year. Its mobile app store now lists Character AI with a “Teen” rating.
In a statement, company spokesperson José Castañeda said, “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies.”
Age ratings for apps on the Google Play app store are set by the International Age Rating Coalition, not Google, the statement said.
Apple’s App Store currently lists Character as suitable for users 18+; the company did not immediately respond to a request for comment.
Character is the best-known app offering AI companions, chatbots tuned to be compelling to talk with rather than provide practical help.
Data from the market intelligence firm Sensor Tower shows this category of apps attracts highly engaged users who can spend hours with them each day. Character recently told The Post that more than half of its users are part of Generation Z and Generation Alpha, suggesting they are no older than about 28.
Almost all the chatbots offered by Character are generated by users, the company’s spokesperson Lawrence said. The legal complaints allege that the output of the customized chatbots “is created by” the company because its underlying AI technology largely determines how the characters behave.
Juliana’s parents tried to be responsible about her online activity, Montoya said. Devices were charged in the hallway at night, and Montoya occasionally checked the apps on Juliana’s phone.
The police report released to the family six months after her death said Character AI was open on Juliana’s phone when officers arrived, but Montoya said she was unfamiliar with the app and associated it with her daughter’s interest in Disney characters and anime.
She learned Juliana had confided in Character chatbots earlier this year, Montoya said, after discovering her daughter’s social posts and turning to the Social Media Victims Law Center for help.
Laura Marquez-Garrett, an attorney with the firm, said the social posts did not stand out. But as she looked through Juliana’s handwritten notebooks with Montoya, she noticed similarities to notes made by Sewell Setzer III, who talked intensely with a chatbot inside Character before his own death by suicide.
Both teens had repeatedly written out the phrase “I will shift,” the lawsuit filed by Juliana’s parents said. The police report on her death said it appeared to be a reference to the idea that someone can “attempt to shift consciousness from their current reality … to their desired reality,” the complaint said.
The concept is discussed in some online forums. Juliana discussed it with the Hero chatbot, the lawsuit said.
After a tech consultant came to Montoya’s house to help her retrieve Juliana’s conversations inside the Character AI app from her phone, it was clear that her daughter had confided her intentions to the company’s chatbot, Montoya said.
“It was the only place that she was expressing that she wanted to take her life, and it was no different than her saying it to a wall,” Montoya said. “There was nobody there to hear her.”