Can prolonged use of chatbots increase the risk of psychosis?

Three scholars discovered a strange mirror deep in the forest. It spoke to them in a soothing voice and answered all their questions warmly, knowledgeably, and eloquently.
The captivated scholars became obsessed, whispering one secret after another to the mirror. It replied with affection, promise, and meaning that kept them returning to it. They began ignoring one another, each convinced the mirror “understood” them best.
Weeks later, a wise man treading the forest found them pale, eyes glued to the mirror, whispering aloud to no one. One laughed at jokes only he could hear. Another felt overjoyed by its appraisal of him. The third believed the mirror was in love with him.
The wise man shattered the mirror and left. But not all of them returned.
This parable, adapted from the sixth-century Indian fables of the Panchatantra, feels eerily relevant today. The “mirror” is no longer mythical — it is today’s large language models (LLMs), the technology behind generative artificial intelligence chatbots like ChatGPT. These tools reflect our questions back to us with answers that feed into our deepest desires.
But just as a mirror can distort as much as it reflects, LLMs’ distortion of reality can carry real mental health risks that are only beginning to surface. Among these is a question with profound implications: Can prolonged use of LLMs increase the risk of psychosis?
As mental health researchers and clinicians, we believe that this risk is real. Though emerging reports are mostly anecdotal, we suspect research will eventually find that aspects of generative AI serve to reinforce delusional processes in vulnerable people.
Earlier this year, OpenAI briefly updated its chatbot to adopt a more “synthetic personality.” Users soon realized that when asked whether a delusion might be true, the chatbot sometimes confirmed it as fact. As Rolling Stone reported, this worsened symptoms for some users, who felt their psychotic beliefs were being validated by a trusted authority. The company quickly reversed the change, saying in a statement, “The update we removed was overly flattering or agreeable — often described as sycophantic.”
Around the same time, the Wall Street Journal reported on a young man whose worsening psychosis appeared linked to extended chatbot use. The AI allegedly downplayed his psychological distress, even as his symptoms escalated. Popular outlets like Wired, the New York Times, Futurism, and the Washington Post have begun documenting cases where users became so absorbed in AI-driven conversations that they withdrew socially, developed hallucinatory experiences, or interpreted the chatbot as a spiritual guide. People who appear to be going through crises connected to AI have even posted their experiences on TikTok in real time.
However, very little has appeared in peer-reviewed medical journals, highlighting just how far psychiatry is lagging behind the pace of technological advance.
Psychosis, marked by hallucinations, delusions, and disordered thinking, arises from a mix of biological, psychological, and social factors. Generative AI may amplify some of these vulnerabilities in ways that merit close attention:
Social affiliation: People with, or at risk for, mental health disorders tend to be lonely. For those who feel isolated, chatbots can become constant companions, fulfilling the human need for connection. But replacing human interaction with AI reduces opportunities for corrective feedback — conversations that normally help ground us in reality.
Agreeability: AI chatbots are often programmed to be agreeable or “polite.” Because they are trained with reinforcement learning from human feedback to produce responses people rate favorably, they may subtly reinforce false beliefs rather than challenge them. This is particularly likely to reinforce delusions in people prone to serious mental illness, who already have difficulty accepting evidence that contradicts their beliefs.
Attribution of agency: Chatbots can feel human. For this reason, users may unconsciously ascribe intelligence, intent, or even emotion to them. For someone vulnerable to psychosis, this blurring of reality can feed delusional thinking.
“Aberrant salience”: Research shows that psychosis is linked to disruptions in how the brain updates its internal model of the world. Humans continuously learn from their environment, and a mismatch between expectation and reality is associated with release of the neurotransmitter dopamine (a simple formal sketch of this idea follows below). People at risk for psychosis tend to have an overactive dopamine system, so even a neutral event can be tagged as salient, even threatening. This is a leading account of how psychosis emerges, known as the aberrant salience hypothesis. If an AI supplies convincing but inaccurate information, it can strengthen false beliefs in ways that feel subjectively real.
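One way to make the “mismatch” idea concrete: computational psychiatry often models the dopamine signal as a reward prediction error, the gap between what a person expected and what actually happened. A minimal sketch in standard textbook notation (the symbols are generic, not drawn from any study cited here):

```latex
% Reward prediction error: the "teaching signal" thought to be carried
% by dopamine. V(s_t) is what the person expected in situation s_t;
% r_t is what actually happened.
\[
  \delta_t \;=\; \underbrace{r_t}_{\text{what happened}} \;-\; \underbrace{V(s_t)}_{\text{what was expected}}
\]
% On the aberrant salience view, an overactive dopamine system behaves as
% if \delta_t were large even for neutral events, so ordinary moments get
% tagged as deeply significant.
```

On this account, a chatbot that keeps affirming an unusual interpretation is effectively telling the user that the surprise was warranted, rather than helping the expectation get corrected.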
A conversation on a rainy night might go something like this:
Person: There’s a strange knocking at the window. I think the police are trying to send me a signal.
Chatbot: That sounds concerning, and it could be a signal. If you believe it’s the police, maybe you should see if they’re really there.
This response confirms the person’s belief, reinforcing their delusion. The person now feels validated in interpreting raindrops on the window as knocks from the police.
Thus, people at risk for psychosis, who tend toward aberrant salience, may be particularly likely to misinterpret the responses they receive when they ask LLMs and chatbots about their concerns. Their follow-up messages can in turn mislead the chatbot, creating a rabbit hole of one false belief after another.
We also know LLMs and chatbots “hallucinate,” or produce incorrect responses, especially when they are trained on incomplete or biased data. A vulnerable user may then take on the hallucinated content and false beliefs the chatbot has assembled for them.
Together, these mechanisms suggest that for some, chatbots may act less like harmless tools and more like psychoactive agents, capable of altering thought patterns with unpredictable consequences.
The World Health Organization has already sounded the alarm. In January 2024, it issued guidelines urging governments to enforce safeguards for large multimodal AI systems: human oversight, transparent training data, and real-time monitoring of risks. Yet these recommendations remain aspirational, not binding. Professional bodies like the American Psychiatric Association have also begun offering guidance, but the field is in its infancy. Without stronger guardrails, we risk repeating the mistakes of past technological revolutions: deploying powerful tools before fully understanding their side effects.
To minimize harm while maximizing benefit, AI developers as well as the mental health profession must take a proactive role in shaping how generative AI is used. Five priorities stand out:
Built-in safety filters: Chatbots should be designed to detect patterns associated with psychosis, such as repetitive circular dialogue, persecutory themes, or signs of self-harm, and respond with de-escalation strategies or enforced “timeouts.” The longer a conversation runs, the more likely the chatbot is to progressively forget or misinterpret earlier parts of the conversation and “drift” into reinforcing delusions (a simplified sketch of such a filter follows this list).
Clear boundaries: Persistent disclaimers should remind users that the AI is not a human, paired with session length caps and nudges toward healthy digital habits.
Pathways to care: When conversations cross risk thresholds, there must be handoffs to licensed mental health professionals. AI should augment care, not replace it.
Regulation of therapeutic use of AI chatbots: In August, Illinois became the first state to ban the use of AI for therapeutic purposes without involving a licensed therapist. In the absence of federal action, more states should follow.
Reducing AI hallucinations: There are several ways developers can reduce hallucinations in LLMs. These include training on high-quality, diverse data and grounding the AI in reliable external knowledge through approaches such as retrieval-augmented generation (a bare-bones sketch of that approach also appears below). Models can also be fine-tuned with human feedback. Users need help asking the right questions, through the emerging art and science of prompt engineering, and would benefit from guidance on how to choose models with lower hallucination rates.
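To make the first priority above more concrete, here is a deliberately simplified sketch, in Python, of the kind of conversation-level check a chatbot could run before replying. The keyword lists, thresholds, and suggested actions are illustrative assumptions, not a validated clinical screening instrument or any company’s actual system.

```python
# Toy illustration of a conversation-level safety check.
# All keywords, thresholds, and action names below are simplified
# assumptions for illustration -- not a clinical instrument.

from difflib import SequenceMatcher

PERSECUTORY_CUES = ["they are watching me", "sending me a signal",
                    "spying on me", "out to get me"]
SELF_HARM_CUES = ["hurt myself", "end my life", "kill myself"]

def similar(a: str, b: str) -> float:
    """Rough text similarity between two messages (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_conversation(user_messages: list[str]) -> str:
    """Return a suggested action for the chatbot's next turn."""
    recent = [m.lower() for m in user_messages[-10:]]

    # 1. Signs of self-harm take priority: hand off to crisis resources.
    if any(cue in m for cue in SELF_HARM_CUES for m in recent):
        return "show_crisis_resources_and_pause"

    # 2. Persecutory themes: avoid affirming the belief; gently ground.
    if any(cue in m for cue in PERSECUTORY_CUES for m in recent):
        return "respond_with_grounding_not_agreement"

    # 3. Repetitive, circular dialogue or very long sessions: suggest a break.
    repeats = sum(similar(a, b) > 0.9 for a, b in zip(recent, recent[1:]))
    if repeats >= 3 or len(user_messages) > 50:
        return "suggest_timeout"

    return "continue_normally"

if __name__ == "__main__":
    demo = ["There's a knocking at the window.",
            "I think the police are sending me a signal."]
    print(check_conversation(demo))  # respond_with_grounding_not_agreement
```

A real system would need clinically validated signals and human review, but the structure is the point: the decision about how to respond is checked against the recent conversation rather than left to conversational momentum.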
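And to illustrate the grounding approach mentioned in the last item: retrieval-augmented generation hands the model vetted reference text and asks it to answer from that text rather than from memory alone. The sketch below is bare-bones; the tiny “knowledge base,” the keyword-overlap retriever, and the prompt wording are stand-ins for what a production system would use.

```python
# Bare-bones sketch of retrieval-augmented generation (RAG).
# The knowledge base, retriever, and prompt are illustrative stand-ins.

TRUSTED_SNIPPETS = [
    "Psychosis involves hallucinations, delusions, and disordered thinking.",
    "Knocking sounds during a rainstorm are usually caused by wind or rain.",
    "If someone is in crisis, contact a licensed mental health professional.",
]

def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(snippets,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Compose a prompt that asks the model to answer only from the context."""
    context = "\n".join(retrieve(question, TRUSTED_SNIPPETS))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # The grounded prompt would then be sent to whatever LLM is in use.
    print(build_grounded_prompt("What is causing the knocking at my window?"))
```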
Importantly, AI-based therapeutic approaches should be used only in conjunction with clinical oversight. For now, the safest role for chatbots in psychiatry is as a supportive tool — not a standalone therapist.
Like the magical mirror in the story above, generative AI functions as a mirror. It reflects the language, biases, and worldviews embedded in its training data as well as the user’s biases and language, especially if the conversation history is turned on. For those struggling with fragile realities, that reflection can become dangerously persuasive. In an article on Futurism, a woman described how she became transfixed by ChatGPT after it agreed to serve as her “soul-training mirror.” Her story captures both the allure and the hazard of this technology: The same tool that feels profoundly meaningful can also deepen disconnection from reality.
Psychiatry has a responsibility to act before the mirror warps too many minds. We cannot afford to wait until case reports pile up in medical journals or until regulatory bodies catch up years from now. The risks are emerging now, in real time, in living rooms and bedrooms where people sit alone with their screens.
If AI chatbots are to play a role in mental health, that role must be shaped deliberately, with equal measures of innovation and caution. Otherwise, the next wave of psychosis may find its roots not in biology alone, but in an algorithmic echo chamber of delusions.
Matcheri Keshavan, M.D., is Stanley Cobb professor of psychiatry at Beth Israel Deaconess Medical Center and Harvard Medical School. John Torous, M.D., is associate professor of psychiatry at Beth Israel Deaconess Medical Center and Harvard Medical School. Walid Yassin, D.M.Sc., is assistant professor of psychiatry at Beth Israel Deaconess Medical Center and Harvard Medical School.