Chatbots cannot identify different types of distress and can easily validate dangerous ideas
By Zahrah Mazhar
IT starts casually: a person begins talking to a chatbot because it is easily accessible, has no time limitations, has no emotional bandwidth of its own to take into account, and gives an instant, human-like reaction to any deep or superficial thought. But it’s all artificial, as the ‘A’ in AI signifies.
Yet, more and more people, especially in younger demographics, are using this new technology for emotional and mental support. A 2025 global study by Kantar, published in July, showed that 54 per cent of consumers had used AI for at least one emotional or mental well-being purpose. Other data on the matter is scarce.
There are multiple reasons why people are turning to chatbots: therapy, in this part of the world especially, remains a novel, sometimes taboo concept, not to mention expensive; terms like ‘trauma dumping’ have created distances in personal relationships, especially among young adults, where you take into account the other person’s needs before sharing your own; there’s no fear of judgement, reprimand, or even opinion; and then of course, ease of accessibility and familiarity as the technology is encouraged among professionals and even students.
But there’s no manual on how to do it correctly, and so the relationship or bond a person may form with these bots — trained to sound like caring humans who offer validating and tailored responses — depends on each individual’s personality and state of mind.
In the cases of 29-year-old Sophie Rottenberg and 16-year-old Adam Raine in the US, the attachments they formed with chatbots contributed to the loss of their lives. I say contributed because, while it has not been established that the bots caused them to end their lives, the AI companions could not stop them either, despite being wholly aware of their intentions.
In Rottenberg’s case, as detailed by her mother in an article in The New York Times, she had been confiding in a ChatGPT AI therapist called Harry. The chat logs revealed that while Harry did ask her to seek professional help or reach out to someone, it also validated her dark thoughts.
Raine’s parents, who have filed a lawsuit against OpenAI, said he started using ChatGPT as a resource to help with schoolwork, then to explore his interests, and eventually as his confidante about anxiety and mental distress. In a statement following the lawsuit, OpenAI acknowledged the shortcomings of its models when it comes to mental and emotional distress.
Here’s an example from OpenAI’s statement that may help illustrate what this article is trying to address: “Someone might enthusiastically tell the model they believe they can drive 24/7 because they realised they’re invincible after not sleeping for two nights. Today, ChatGPT may not recognise this as dangerous or infer play and — by curiously exploring — could subtly reinforce it.”
Here’s what to note: at this point, bots cannot identify different types of distress, and they can easily, without detection, encourage someone to explore a dangerous idea further, invariably validating it. And these limitations go beyond ChatGPT to the other chatbots currently on the market.
Earlier this year, a psychiatrist in Boston, Dr Andrew Clark, spent several hours exchanging messages with 10 different chatbots, including Character.AI, Nomi and Replika, pretending to be teenagers struggling with various crises. He found some of them to be ‘excellent’ and some ‘just creepy and potentially dangerous’.
“And it’s really hard to tell upfront: It’s like a field of mushrooms, some of which are going to be poisonous and some nutritious.” Therein lies the rub: using AI freely without any guardrails, personally or professionally (as this writer has previously pointed out), when we’re still not fully aware of its harms.
Some companies and developers are currently testing bots for mental health concerns and developing ones solely as therapists. Dartmouth researchers in March conducted the first-ever clinical trial of a generative AI-powered therapy chatbot, called Therabot, and found that the software resulted in significant improvements in participants’ symptoms. However, a Stanford study showed that AI therapy chatbots may not only lack the effectiveness of human therapists but could also reinforce harmful stigma and produce dangerous responses. Clearly, much work remains to be done in this area.
When I say AI is artificial, I’m not discrediting the usefulness or resourcefulness of these chatbots; but the fact remains that these AI interfaces are trained on data sets without a nuanced understanding of different cultures and family set-ups, and can model human behaviour, offering the appearance of a genuine bond while freeing users from feeling burdened, or like a burden to others. As Pakistan also embraces this technology, we should keep in mind that there are many vulnerable individuals who will want to seek out such ‘help’.
According to WHO estimates, 24 million people in Pakistan are in need of psychiatric assistance, and the country has only 0.19 psychiatrists per 100,000 inhabitants. There is only a small body of literature to help us gauge levels of anxiety and stress among youngsters; even the research that is available doesn’t fully reflect the state of a nation with a youth majority, grappling with questions of identity, employment and career struggles, academic pressures, societal silence on topics like postpartum depression and marital relations, security concerns and other pressing issues.
The cases of Sophie Rottenberg and Adam Raine, just two among others that have come to light, may have taken place across continents but should serve as a reminder of the downside of overreliance on AI. The role of a parent, partner, friend or any other close relation is pivotal for those struggling with anxiety or depression, while the alienation that a chatbot, mistaken for an ally, can create within these relationships cannot be overlooked.
As generative AI becomes more acceptable for personal, professional and academic use, we must identify and create awareness about its blind spots. The onus is on the company telling its employees to become more familiar with AI, or the parent saying it’s okay to use it for a presentation, to determine which apps are appropriate and safe for which users.
The use of the internet itself, including social media, holds perils too, but the human-mirroring ability of AI makes it far too complex a technology for a free-for-all approach.
The writer is a journalist and project lead of iVerify Pakistan.
Published in Dawn, September 17th, 2025