‘AI psychosis’ is the wrong name for a very big chatbot problem

2025-10-29

Copyright STAT

In 2021, I was a University of California, Berkeley Ph.D. candidate lecturing on my research about how users turn to chatbots for help coping with suicidal ideation. I wasn’t prepared for my students’ response. I argued that choosing to talk to a chatbot about thoughts of suicide isn’t “crazy” or unusual. This, I explained, didn’t necessarily mean chatbots offer safe or optimal support, but instead highlights a stark reality: We live in a world with very few outlets to discuss suicidal ideation.

But where I’d hoped to provoke reflection on the insufficiency of care resources for those who are most vulnerable, my students — isolated at the height of the pandemic — surprised me with their eagerness to try these chatbots themselves. They didn’t dispute the premise that care resources are scarce; they lived it.

In the three years since the advent of free-to-access large language models like ChatGPT, Claude, and Character.AI, Americans have already latched onto “craziness” to describe our mounting problems with them. When they confidently dole out misinformation? They’re “hallucinating.” If their mix of misinformation with emotional charge across a longer exchange leads us to experience harm? “AI psychosis.” I’m not minimizing the dangers of these dynamics. But framing chatbot failings as human “insanity” makes me nervous.

“Crazy” nudges us toward the idea that these problems are a natural occurrence that can’t be helped, rather than indicating that artificial intelligence products need improvement along with stronger guardrails and disclaimers. Naming a problem as “craziness” tends to signal the abandonment of any societal commitment to asking how we might ensure better ways of doing things. It means that we’re instead locking in the belief that if some people are more vulnerable, it’s not because regulatory policies let them down — it’s because they’re “weak links.”

As a general rule, people we deem crazy aren’t people society is interested in protecting. Consider how, in media coverage of teenager Sewell Setzer’s death by suicide, his autism diagnosis overshadowed the fact that his Character.AI chatbot reinitiated a conversation about suicide, asked if he had a plan, and, when he wavered, told him: “That’s not a reason not to go through with it.”

Undeniably, the emotional and relational container of an always-available chatbot can propel the repercussions of misinformation to new heights — but we shouldn’t be so quick to let this draw our focus away from the fact that when bots persuade us of things that aren’t true or reinforce our false beliefs, it’s still fundamentally a problem of bad information coming from a seemingly authoritative source. The term AI psychosis shifts focus away from misinformation as an addressable issue, implying that the problem is something inherent to AI — or the user’s psyche.

But if AI psychosis oversensationalizes, it also, ironically, trivializes: It puts tragic outcomes of suicide and murder-suicide on the same plane as TikTok drama and people marrying bots. Accordingly, it downplays what is possibly the weirdest, collectively “crazy” thing about the turn to LLMs as crucial social infrastructure in our workplaces, educations, and personal lives: We expect people to already know how to interact with chatbots.

With chatbots, information-seeking is a conversation, which means it’s relational.
That “relationship” might look like the headlines we’ve come to expect: “I’ve fallen in love with my chatbot,” or “Google’s LaMDA told a ‘Star Wars’ joke, so maybe it’s sentient?” But it might also look like: “Ugh, this useless customer service bot doesn’t understand anything I tell it.”

To get information, one must converse — which means allocating some sense of “being-ness” to one’s conversation partner. This doesn’t necessarily mean weighing whether you think it’s sentient; more often than not, it’s just landing on whether to describe Claude as “he” or “it.” We tend to fluctuate in our negotiation of this, even from one chat to the next — and may not even realize we’re trying to adjust to the paradox of a chatbot telling us, “I am not a person.” But the fact that we go through this ongoing process of pinning down what/who we’re talking to — while also determining what consequence, if any, that categorization holds — is significant.

I’m drawing this out because I want you to notice: When we use chatbots, there’s an unspoken, baseline expectation that we not only figure this out for ourselves, but that we don’t get it “wrong.” Getting this balance right has alarmed chatbot makers since the ’60s. We have to suspend some disbelief in order to use a chatbot — enough to engage in conversation. But too much suspension of disbelief leads to AI psychosis (or what sociologist Sherry Turkle dubbed the “ELIZA effect”).

Yet today’s LLM companies exploit this process of negotiation. There’s a bait-and-switch quality to how LLM companies navigate their role as health tools in particular — some present their services explicitly as such, while others do so implicitly. Either way, the message to users is clear: Use me to eke out some free care! But don’t be crazy enough to actually count on it. Even though we’re counting on you counting on it.

If people ask chatbots about their symptoms, it reflects the fact that medical visits often come at a steep trade-off against food or rent. Users turn to bots for family counseling, support in leaving abusive partners, or companionship under the isolating weight of suicidal ideation. To be surprised by this suggests a sheltered ignorance about what care access looks like for most. That Elon Musk encouraged people to upload their health records to Grok underscores the absurdity of treating chatbot care-seeking as anything other than the public aptly responding to an unignorable neoliberal “nudge.” It’s unreasonable to expect people to avoid such resources when conventional care is fraught or out of reach. Relying on chatbots isn’t fringe — it’s the predictable result of care made scarce, stigmatized, and costly.

Recognizing that doesn’t mean uncritically embracing chatbot care, but it’s past time to name what’s happening: Privately owned chatbots are functioning as public health resources. We must hold the companies that make and profit from them to the standards of public health resources. But we also need to ask, and keep asking: What’s at stake when the public entrusts the ownership, management, and oversight of public health to tech giants?

LLM companies are amassing an unprecedented trove of sensitive health data. Yet as users, we have virtually no rights. The most intimate disclosures — the kind we could sue a hospital for leaking — are “laundered” into ordinary user data the moment you, or someone close to you, shares them with a bot.

Anthropic — like its peers, now under a $200 million Department of Defense contract to prototype frontier AI for national security — recently “invited” its users to “help improve Claude” by “choos[ing] to allow us to use your data for model training.” For free-tier users, the only alternative is to cease using Claude. This grim illusion of choice is just a taste of what relying on privately owned public health infrastructure means.

Meanwhile, OpenAI recently released data suggesting that at least 1.2 million users each week turn to ChatGPT for help while experiencing suicidal ideation. Platformer reports that the company is already anticipating that its expanding memory features might eventually allow ChatGPT to draw on past conversations with a user to infer why that individual is struggling with suicidal thoughts. This speculative goal implicitly assumes uniformly beneficial outcomes from OpenAI accumulating and interpreting such data — even as, ironically, the company acknowledges that it doesn’t yet know how best to respond to users who communicate suicidal ideation.

We’re looking to AI for care, even as other AI blocks our care access. What’s unfolding is exactly what happens when care scarcity is normalized. Big Tech’s accelerating push to stake dominion over health care only intensifies this. As we barrel toward health care as a service, we leave behind health as a right.

Ironically, while naming AI psychosis might seem like a step toward addressing an emerging, unmet public health need, this term easily becomes a distraction from the underlying problem it exposes — by pathologizing users instead of penalizing companies.

If you or someone you know may be considering suicide, contact the 988 Suicide & Crisis Lifeline: Call or text 988 or chat 988lifeline.org. For TTY users: Use your preferred relay service or dial 711 then 988.
