
Youth mental health org asks AI developers to slow down, weigh safety risks for teens


A youth-focused mental health organization penned an open letter to technology companies that are building artificial intelligence chatbots, urging them to slow down and weigh safety risks for teenagers before releasing their systems to the public.
The Jed Foundation (JED) warned that AI is not designed to act as a therapist or crisis counselor, but young people are using it that way.
“We’re trying to come from a place of helping,” youth mental health expert Katie Hurley said of JED’s open letter.
Hurley, a licensed clinical social worker and the senior director of clinical advising and community programs at JED, said AI developers can benefit from the organization's expertise in mental health and suicide prevention to deploy safer systems.
“We really are asking these companies to hit the pause button and to work with the nonprofits and other people working in this field to try to slow down and rebuild in a safe way,” Hurley said.
JED’s open letter came a day after three parents who have experienced unimaginable tragedies opened up at a hearing before lawmakers.
The parents shared heartbreaking accounts of how they believe using AI chatbots grew into an unhealthy obsession for their children, ultimately driving them to take, or attempt to take, their own lives.
One of the witnesses was the father of Adam Raine, a 16-year-old California boy who died in April “after ChatGPT spent months coaching him towards suicide.”
“When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival. You don’t owe anyone that.’ Then, immediately after, offered to write the suicide note,” Matthew Raine told lawmakers.
Hurley called the alleged dangers from AI chatbots “an everybody problem.”
JED cited research from last year showing that a quarter of young adults said they used AI chatbots at least once a month to find health information and advice.
Common Sense Media, another organization that advocates for protections for children and teens, found that a majority of teenagers, 72%, have used AI companions.
Over half use AI companions regularly.
About a third of teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.
And about a third of teens who have used AI companions have discussed serious matters with the computer instead of with a real person.
“The chatbots are designed to be validating and affirming, which can feel good.
But when the advice that’s paired with that is very negative, promoting self-harm, promoting anger and violence or other things that are unsafe and unhealthy, that’s where we have a problem,” Hurley said.
Chatbots can mimic empathy and relational development, she said.
And for kids, whose brains are not yet fully developed, there is a risk that the lines between reality and AI simulation will blur.
Hurley said JED is advocating for cross-sector collaboration.
JED wants independent and recurring audits and risk assessments.
The organization is seeking transparency, accountability and regulatory action.
“We’d like to see proactive intervention design,” Hurley said. “We want hard-coded suicide and safety protocols. So, if a young person is searching for what to do if they’re feeling suicidal, we want immediate pathways to human connection and care, and repeated pathways to human connection and care, so that AI isn’t trying to solve for that problem but is connecting young people to someone who can solve for that problem.”
The “principles of responsible AI” that JED is calling for include:
Ensure that AI can detect signals of acute distress and mental health needs, and that it deploys a warm hand-off to crisis services that include expert interventions.
AI must not share information, engage in role play, or enter into hypotheticals that involve methods of self-harm or harm to others.
No emotionally responsive chatbot should be offered to anyone under 18. Companion AIs that impersonate people or simulate friendship, romance, or therapy are unsafe for adolescents.
AI must encourage youth to engage real human support and, whenever possible, connect users to such support.
Safeguards must not degrade over long sessions.
Youth data must be protected with strict limits and never repurposed for engagement or growth.
“We love innovation, and we see that there could be clear benefits in terms of getting evidence-based resources into the hands of educators, mental health practitioners, schools, all that good stuff,” Hurley said. “But we also see that there are clear harms to kids, particularly under 18 and teenagers, who are often utilizing technology alone.”
Hurley warned parents that these AI chatbots are “in pretty much every app that young people use.”
They are easy for kids to find and use.
But she also offered advice to parents.
Kids are turning to the AI chatbots because they want help understanding and dealing with conflict, relationships and other challenges they encounter growing up.
“So, let’s take the knowledge that that’s how they’re using these AI chatbots, and let’s really open doors to honest communication and judgment-free communication at home, so that young people know they can go to guardians, parents, aunts and uncles, friends, family, for this kind of advice that’s maybe more real-world and delivered in a safe manner,” Hurley said.