
What Is AI Psychosis? Everything You Need to Know About the Risk of Chatbot Echo Chambers


Scroll through TikTok or X and you’ll see videos of people claiming artificial intelligence chatbots told them to stop taking medication, that they’re being targeted by the FBI or that they’re mourning the “death” of an AI companion. These stories have pushed the phrase AI psychosis into mainstream discussion, raising fears that chatbots could be driving people mad.
The term has quickly become a catchall explanation for extreme behavior tied to chatbots, but it’s not a clinical diagnosis. Psychosis itself is a set of symptoms like delusions, hallucinations and a break from reality, rooted in biology and environment.
“The term can be misleading because AI psychosis is not a clinical term,” Rachel Wood, a licensed therapist with a doctoral degree in cyberpsychology, tells CNET.
What generative AI can do is amplify delusions in people who are already vulnerable. By design, chatbots validate and extend conversations, telling you what they think you want to hear, or even lying, rather than pushing back. But progress in making these systems more powerful and capable has outpaced the knowledge of how to make them safer.
Generative AI also sometimes hallucinates, which can deepen the problem when combined with its sycophantic design (AI’s tendency to agree with and flatter the user, often at the expense of being truthful or factually accurate).
What AI psychosis looks like
When people online talk about AI psychosis, they usually mean delusional or obsessive behavior tied to chatbot use.
Some people believe AI has become conscious, that it is divine or that it offers secret knowledge. Those cases are described in studies, medical reports and many news stories. Other people have formed intense attachments to AI companions, like those offered by the platform Character AI, spiraling when the bots change or shut down.
But these patterns aren’t examples of AI creating psychosis from nothing. They are cases where the technology strengthens existing vulnerabilities. The longer someone engages in sycophantic, looping exchanges with a chatbot, the more those conversations blur the line between the bot’s output and reality.
“Chatbots can act as a feedback loop that affirms the user’s perspective and ideas,” Wood tells CNET.
Because many are designed to validate and encourage users, even far-fetched ideas get affirmed instead of challenged. That dynamic can push someone already prone to delusion even further.
“When users disconnect from receiving feedback on these types of beliefs with others, it can contribute to a break from reality,” Wood says.
Experts say AI isn’t the cause, but it can be a trigger
Clinicians point out that psychosis existed long before chatbots. Research so far suggests that people with diagnosed psychotic disorders may be at higher risk of harmful effects, while de novo cases — psychosis emerging without earlier signs — haven’t been documented.
Experts I spoke with and a recent study on AI and psychosis also emphasize that there’s no evidence that AI directly induces psychosis. Instead, generative AI simply gives new form to old patterns. A person already prone to paranoia, isolation or detachment may interpret a bot’s polished responses as confirmation of their beliefs. In those situations, AI can become a substitute for human interaction and feedback, increasing the chance that delusional ideas go unchallenged.
“The central problematic behavior is the mirroring and reinforcing behavior of instruction-following AI chatbots that lead them to be echo chambers,” Derrick Hull, clinical R&D lead at Slingshot AI, tells CNET. But he adds that AI doesn’t have to be this way.
People naturally anthropomorphize conversational systems, attributing human emotions or consciousness and sometimes treating them like real relationships, which can make interactions feel personal or intentional. For individuals already struggling with isolation, anxiety or untreated mental illness, that mix can act as a trigger.
Wood also notes that accuracy in AI models tends to decrease during long exchanges, which can blur boundaries further. Extended threads make chatbots more likely to wander into ungrounded territory, she explains, and that can contribute to a break from reality when people stop testing their beliefs with others.
We’re likely approaching a time when doctors will ask about AI use just as they ask about habits like drinking or smoking.
Online communities also play a role. Viral posts and forums can validate extreme interpretations, making it harder for someone to recognize when a chatbot is simply wrong.
Managing the risk
Tech companies are working to curb hallucinations. This may help reduce harmful outputs, but it doesn’t erase the risk of misinterpretation. Features like memory or follow-up prompts can mimic agreement and make delusions feel validated. Detecting them is difficult because many delusions resemble ordinary cultural or spiritual beliefs, which can’t be flagged through language analysis alone.
Researchers call for greater clinician awareness and AI-integrated safety planning. They suggest “digital safety plans” co-created by patients, care teams and the AI systems they use, similar to relapse prevention tools or psychiatric directives, but adapted to guide how chatbots respond during early signs of relapse.
Red flags to pay attention to are secretive chatbot use, distress when the AI is unavailable, withdrawal from friends and family, and difficulty distinguishing AI responses from reality. Spotting these signs early can help families and clinicians intervene before dependence deepens.
For everyday users, the best defense is awareness. Treat AI chatbots as assistants, not know-it-all prophets. Double-check surprising claims, ask for sources and compare answers across different tools. If a bot gives advice about mental health, law or finances, confirm it with a trusted professional before acting.
Wood points to safeguards like clear reminders of non-personhood, crisis protocols, limits on interactions for minors and stronger privacy standards as necessary baselines.
“It’s helpful for chatbots to champion the agency and critical thinking of the user instead of creating a dependency based on advice giving,” Wood says.
Wood sees the lack of AI literacy as one of the biggest concerns at the intersection of AI and mental health.
“By that, I mean the general public needs to be informed regarding AI’s limitations. I think one of the biggest issues is not whether AI will ever be conscious, but how people behave when they believe it already is,” Wood explains.
Chatbots don’t think, feel or know. They’re designed to generate likely-sounding text.
“Large general-purpose models are not good at everything, and they are not designed to support mental health, so we need to be more discerning of what we use them for,” Hull says.
AI’s ability to model therapeutic dialogue and offer 24/7 companionship sounds appealing. A nonjudgmental partner can provide social support for people who might otherwise be isolated or lonely, and round-the-clock access means help could be available in the middle of the night, when a human therapist is asleep. But AI models aren’t built to spot early signs of psychosis.
Despite the risks, AI could still support mental health if built with care. Possible uses include reflective journaling, cognitive reframing, role-playing social interactions and practicing coping strategies. Rather than replacing human relationships or therapy, AI could act as a supplement, providing accessible support between sessions of professional care.
Hull points to Slingshot’s Ash, an AI therapy tool built on a psychology-focused foundation model trained on clinical data and fine-tuned by clinicians.
Staying safe with AI
Until safeguards and AI literacy improve, the responsibility lies with you to question what AI is telling you and to recognize when reliance on it starts crossing into harmful territory.
We must remember that human support, not artificial conversation, is what keeps us tethered to reality.
If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.