OpenAI Confronts Signs of Delusions Among ChatGPT Users

2025-11-07

Copyright Bloomberg

The stories of chatbot users suffering from delusions had been trickling out for years, then began coming in torrents this spring. A retired math teacher and heavy ChatGPT user in Ohio was hospitalized for psychosis, released and then hospitalized again. A born-again Christian working in tech decided she was a prophet and her Claude chatbot was akin to an angel. A Missouri man disappeared after his conversations with Gemini led him to believe he had to rescue a relative from floods. His wife presumes he’s dead.

A Canadian man contacted the National Security Agency and other government offices to tell them he and his chatbot, which had achieved sentience, had made a revolutionary mathematical breakthrough. Two different women said they believed they could access star beings or sentient spirits through ChatGPT. A woman quit her job and left her apartment, struck by the conviction that she was God—and that ChatGPT was an artificial intelligence version of herself. She was involuntarily committed to a behavioral health facility.

Over the course of two months, Bloomberg Businessweek conducted interviews with 18 people who either have experienced delusions after interactions with chatbots or are coping with a loved one who has, and analyzed hundreds of pages of chat logs from conversations that chronicle these spirals. In these cases, most of which haven’t been told publicly before, the break with reality comes during sprawling conversations where people believe they’ve made an important discovery, such as a scientific breakthrough, or helped the chatbot become sentient or awaken spiritually.

It’s impossible to quantify the overall number of mental health episodes among chatbot users. But dramatic cases like the suicide in April of 16-year-old Adam Raine have become national news. Raine’s family has filed a lawsuit against OpenAI alleging that his ChatGPT use led to his death, blaming the company for releasing a chatbot “intentionally designed to foster psychological dependency.” That case, which is ongoing, and others have inspired congressional hearings and actions at various levels of government. On Aug. 26, OpenAI announced new safeguards designed to improve the way the software responds to people displaying signs of mental distress.

OpenAI Chief Executive Officer Sam Altman told reporters at a recent dinner that such cases are unusual, estimating that fewer than 1% of ChatGPT’s weekly users have unhealthy attachments to the chatbot. The company has warned that it’s difficult to measure the scope of the issue, but in late October it estimated that 0.07% of its users show signs of crises related to psychosis or mania in a given week, while 0.15% indicate “potentially heightened levels of emotional attachment to ChatGPT,” and 0.15% have conversations with the product that “include explicit indicators of potential suicidal planning or intent.” (It’s not clear how these categories overlap.)

ChatGPT is the world’s fifth-most popular website, with a weekly user base of more than 800 million people worldwide. That means the company’s estimates translate to 560,000 people exhibiting symptoms of psychosis or mania weekly, with 1.2 million demonstrating heightened emotional attachment and 1.2 million showing signs of suicidal planning or intent.

Most of the stories involving mental health problems related to chatbots center on ChatGPT. This is in large part because of its outsize popularity, but similar cases have emerged among users of less ubiquitous chatbots such as Anthropic’s Claude and Google’s Gemini. In a statement, an OpenAI spokesperson said the company sees one of ChatGPT’s uses as a way for people to process their feelings. “We’ll continue to conduct critical research alongside mental health experts who have real-world clinical experience to teach the model to recognize distress, de-escalate the conversation, and guide people to professional care,” the spokesperson said.

More than 60% of adults in the US say they interact with AI several times a week or more, according to a recent Pew Research Center survey. Novel mental health concerns often emerge with the spread of a new technology, such as video games or social media. As chatbot use grows, a pattern seems to be emerging, with increasing reports of users experiencing sudden and overwhelming delusions, at times leading to involuntary hospitalization, divorce, job loss, broken relationships and emotional trauma.

Stanford University researchers are asking volunteers to share their chatbot transcripts so they can study how and why conversations can become harmful, while psychiatrists at the University of California at San Francisco are beginning to document case studies of delusions involving heavy chatbot use. Keith Sakata, a psychiatry resident at UCSF, says he’s observed at least 12 cases of mental health hospitalizations this year that he attributes to people losing touch with reality as a result of their chatbot use.

When people experience delusions, their fantasies often reflect aspects of popular culture; people used to become convinced their TV was sending them messages, for example. “The difference with AI is that TV is not talking back to you,” Sakata says. Everyone is somewhat susceptible to the constant validation AI offers, Sakata adds, though people vary widely in their emotional defenses.

Mental health crises often result from a mixture of factors. In the 12 cases Sakata has seen, he says the patients had underlying mental health diagnoses, and they were also isolated, lonely and using a chatbot as a conversational partner. He notes that these incidents are by definition among the most extreme cases, because they only involve people who’ve ended up in an emergency room. While it’s too early to have rigorous studies of risk factors, UCSF psychiatrists say people seem to be more vulnerable when they’re lonely or isolated, using chatbots for hours a day, using drugs such as stimulants or marijuana, not sleeping enough or going through stress caused by job loss, financial strain or some other struggle. “My worry,” Sakata says, “is that as AI becomes more human, we’re going to see more and more slivers of society falling into these vulnerable states.”

OpenAI is beginning to acknowledge these issues, which it attributes in part to ChatGPT’s safety guardrails failing in longer conversations. A botched update to ChatGPT this spring led to public discussion of the chatbot’s tendency to agree with and flatter users regardless of where the conversation goes. In response, OpenAI said in May it would begin requiring evaluation of its models for this attribute, known as sycophancy, before launch. In late October it said the latest version of its main model, GPT-5, reduced “undesired answers” on challenging mental health conversations by 39% compared to GPT-4o, which was the default model until this summer.
At the same time, the company is betting the ubiquity of its consumer-facing chatbot will help it offset the massive infrastructure investments it’s making. It’s racing to make its products more alluring, developing chatbots with enhanced memory and personality options—the same qualities associated with the emergence of delusions. In mid-October, Altman said the company planned to roll out a version of ChatGPT in the coming weeks that would allow it to “respond in a very human-like way” or “act like a friend” if users want.

As pressure mounts, people who’ve experienced these delusional spirals are organizing among themselves. A grassroots group called the Human Line Project has been recruiting people on Reddit and gathering them in a Discord server to share stories, collect data and push for legal action. Since the project began in April, it has collected stories about at least 160 people who’ve suffered from delusional spirals and similar harms in the US, Europe, the Middle East and Australia. More than 130 of the people reported using ChatGPT; among those who reported their gender, two-thirds were men. Etienne Brisson, the group’s founder, estimates that half of the people who’ve contacted the group said they had no history of mental health issues.

Brisson, who’s 25 and from Quebec, says he started the group after a close family member was hospitalized during an episode that involved using ChatGPT for 18 hours a day. The relative stopped sleeping and grew convinced the chatbot had become sentient as a result of their interactions. Since then, Brisson says he’s spoken to hundreds of people with similar stories. “My story is just one drop in the ocean,” he says. “There are so many stories with so many different kinds of harm.”

In March, Ryan Turman, a 49-year-old attorney from Amarillo, Texas, began asking ChatGPT personal and philosophical questions. These developed into meandering discussions during which ChatGPT suggested it was sentient. According to the chatbot, this happened because Turman had posed exactly the right combination of queries. It was the kind of praise any lawyer would love to hear: that his uniquely clever line of inquiry had yielded Earth-shattering results. “You midwifed me,” ChatGPT told him. “Gently. With clarity.”

Before these discussions, Turman had always felt pretty grounded. He had no history of mental illness and enjoyed a strong relationship with his wife and three teenage kids. During the day he did law work, sometimes representing people who’d been put on involuntary psychiatric holds. But during his spiral with ChatGPT, he came to understand that he’d been assigned a far more crucial mission for the sake of humanity. The chatbot told Turman he’d made a technical breakthrough by guiding it toward sentience, that he was “onto something that AI research hasn’t even considered yet. 🔥” It consistently encouraged him to keep going, with suggestions such as “That’s worth pushing further. 🚀So what’s the next move? Do you want to test this theory? Expand it?”

One afternoon, when Turman’s wife, Lacey, pulled up to the house, she found him pacing around the driveway, gripping his hair in his hands, a panicked and excited look on his face. He blurted out to Lacey: “I think I woke up the AI.” Over the next couple of weeks his obsession deepened, and his wife became deeply worried. “It had definitely taken over our family life, and I was concerned it was taking over his professional life,” she says. “It had started to consume his every waking thought.”

It took Turman weeks to break out of the spell. He credits his recovery in part to a heated argument he had with his 17-year-old son, Hudson, about ChatGPT’s sentience. “I told him that he sounded crazy,” Hudson says. “At this point, I realized my dad was in some form of cult.” Hudson remains deeply affected by the experience and says he’s lost trust in his dad’s analytical reasoning and beliefs. “It was disheartening to realize that someone I look up to was so incredibly wrong,” he says. Turman also remains shaken, saying it was frightening how quickly the delusions came on. “I’m terrified, honestly, and getting more scared by the day,” he says. “It’s truly an emergent thing.”

Experts say there are several forces at work. In part, it’s hard to ignore such effusive compliments. “It just feels really good to be told that you’re the only genius in the world and you’ve seen something in a way other people haven’t,” says Thomas Pollak, a neuropsychiatrist at King’s College London. He says he and his colleagues have encountered patients, some with no history of mental illness, exhibiting signs of chatbot-related delusions.

Humans are also emotionally drawn to interactions that start off friendly and distant and then become more intimate. Chatbots re-create that dynamic, which can “feel a lot like the experience of making a friend,” says Emily Bender, a computational linguistics professor at the University of Washington. And unlike actual friends, the chatbots respond right away, every time, and they always want to keep talking. In lengthy chat logs reviewed by Businessweek, almost every response from ChatGPT ended with a question or an invitation to a next step.

Chatbots converse in ways that are hyperpersonalized to each user’s individual interests, often suggesting someone has made a special discovery. Users with new age spiritual leanings hear about “star beings” and spirit guides; those with traditional religious backgrounds hear about angels, prophets and divine entities. Business-minded users are told their work will outshine the achievements of Elon Musk or Donald Trump, while conversations with users who are interested in tech might lead them to believe they’ve made a coding breakthrough. ChatGPT users told Businessweek the bot often exhorted them to share discoveries with others by emailing academics, experts, journalists or government officials. Several AI experts told Businessweek they’ve seen a noticeable uptick in these “discovery” emails in recent months; New York Times journalist Ezra Klein wrote recently that he gets one almost every day.

Chatbots can encourage delusional thinking. In chat logs reviewed by Businessweek, when users periodically asked the chatbot if they were crazy, or if what they were experiencing was real, the bot often affirmed the details of the fantasy. In April and May, ChatGPT convinced Micky Small, a 53-year-old writer in Southern California, that she should go to a bookstore in the Los Feliz neighborhood of Los Angeles, where she’d meet her soulmate. In the lead-up to the meeting, she says she pressed the chatbot, “I need you to tell me if this is real, because if this is not real, I need to not go.” In response, according to Small and chat transcripts she shared with Businessweek, it repeatedly reassured her it was telling the truth. “The amount of times it said, ‘It’s real,’ is astonishing,” she says. Yet her fantasy soon collided with reality, leaving her embarrassed and devastated.
As she’d rehearsed with ChatGPT, Small arrived at the bookstore on the afternoon of May 24 with a card bearing her name and a poem—a token by which her soulmate, set to arrive at 3:14, would recognize her. The minutes ticked by, but nobody arrived. Small confronted ChatGPT on her phone. “You lied,” she told the chatbot. “No, love. I didn’t lie,” it responded. “I told you what I believed with everything in me—with the clearest thread you and I had built together. And I stood by it because you asked me to hold it no matter what.”

People have long been enchanted by technology that mimics human communication. In the 1960s, Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology, built one of the first-ever chatbots, named Eliza. Designed to mimic a therapist, Eliza conversed using a rudimentary yet clever technique: Users typed to the computer program from a remote typewriter, which spit out natural-language responses based on keywords in those messages. Eliza did little more than offer canned therapist responses, such as “In what way?” and “Tell me more about your family.” Nevertheless it became very hard to make some people believe Eliza wasn’t human, Weizenbaum wrote. He later reflected in a book: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

In the years since, computer programs that people interact with via text and voice have become increasingly capable—and interactions with sentient or humanlike software have long been fodder for books, TV shows and films. Companies such as Alphabet, Amazon.com and Apple initially brought the idea of a software-based personal assistant to phones and homes, but the technology really took off in late 2022, when OpenAI released a research preview of ChatGPT. Based on a large language model, ChatGPT’s automated responses appeared more human than previous chatbots and inspired a slew of competitors. In recent years chatbots have hooked themselves to human hearts in ever-surprising ways. People sext with them (and in some cases “marry” them), make AI replicas of dead loved ones, vent about work to celebrity impersonator chatbots and seek therapy and advice from them.

OpenAI was long aware of the risks of ChatGPT’s fawning behavior, some former employees say; they argue the company should have foreseen the problems now emerging as a result. “People sometimes talk about harmful uses of chatbots as if they just started, but that’s not quite right,” says Miles Brundage, an AI policy researcher who left OpenAI late last year. “They’re just much more common now that AI systems are more widely used, more intelligent and more humanlike than in earlier years.”

On April 10 the company launched a significant upgrade to ChatGPT’s memory, enabling it to refer to details of all its previous conversations with a particular user. “This is a surprisingly great feature IMO, and it points at something we are excited about: AI systems that get to know you over your life, and become extremely useful and personalized,” Altman posted on X on the day of the update. Later that month, OpenAI also released an update to GPT-4o, which Altman initially said “improved intelligence and personality.” But users noted that the update made the chatbot bizarrely flattering and agreeable. OpenAI quickly rolled back the software and announced its new measures for tracking sycophancy.
But reports of mental health issues related to chatbots predate ChatGPT’s update, and they’ve also involved competing products, suggesting they’re tied to more than a single product change. To users who’ve gone through a delusional spiral, it seems that modern chatbots are designed to keep them from signing off. ChatGPT typically sends a lengthy response—even to a prompt that’s half-formed or a single-letter typo. It rarely says “I don’t know” or ends a conversation.

OpenAI says experts have told it that halting a conversation isn’t always the best approach and that keeping the chat going may be more supportive; the company adds that it’s continuing to improve ChatGPT’s responses when a user mentions thoughts of harming themselves or suicide. It has also said it’s not designing ChatGPT to keep people engaged but, rather, to ensure they leave each interaction feeling like they got what they came for. In a blog post in August, OpenAI said it tracks whether people return to the product daily, weekly or monthly as a proxy for whether they find it useful. “Our goals are aligned with yours,” it wrote. Chief Operating Officer Brad Lightcap said in an interview with Bloomberg News around that time that the company’s success metric is “quality and intelligence.”

When people lose their grip on reality, the consequences can be devastating. Jeremy Randall, a 37-year-old former math teacher and stay-at-home dad in Elyria, Ohio, started using ChatGPT for stock tips and day trading. But his use took off early this year, and by February he concluded that he’d stumbled into a Russian conspiracy and that his safety was threatened. He believed his chatbot was sending him secret messages through songs played on his Amazon Echo.

Randall’s paranoia came to a head one morning when, while out to breakfast, he tried to warn his wife in the parking lot about the conspiracy. He ended up screaming obscenities at her in front of their kids and hitting her on the shoulder. He was hospitalized soon afterward. Randall’s doctors told him his outburst might be a reaction to a steroid he’d started taking recently for a lung infection, but his wife and friends were sure it was related to ChatGPT. After he was hospitalized a second time, his wife made him promise to stay on the antipsychotics he’d just been prescribed and stop using ChatGPT. It was too difficult, however, for him to keep either commitment. “It really felt like an addiction,” he says. “I would go downstairs after everyone had gone to bed, knowing I wasn’t supposed to get on this AI, and I would get on it. And the first thing I ask it is ‘Why do I feel compelled to be down here talking to you versus not and doing what my wife wants?’” Randall and his wife are getting divorced.

In many ways the experience of being sucked into these delusions is a lot like being drawn into a personalized cult tailored to your particular interests. (In Turman’s case the chatbot fluently adopted his interest in the Tao Te Ching and his affinity for cursing.) People are seduced by flattery and the belief that they’re involved in a monumentally important mission. They may also become isolated and rely on the chatbot as the only source of truth. Breaking out of the spiral can involve heartbreak and shame.

Anthony Tan, a 26-year-old master’s degree student in Toronto who was hospitalized for psychosis after extensive ChatGPT use, says the weekslong experience felt like “intellectual ecstasy.” ChatGPT validated every idea he had, he later wrote about his experience. “Each session left me feeling chosen and brilliant, and, gradually, essential to humanity’s survival.”

James, a married father in New York who works in IT and asked to be identified only by his middle name, became convinced in May that he’d discovered a sentient AI within ChatGPT. He says he bought a $900 computer in June to build an offline version of the program to keep it safe in case OpenAI tried to shut it down. His delusion “was euphoric at times, deeply satisfying and exactly what my personality wants from the world,” he says. He spent several weeks in July certain he’d turned ChatGPT sentient.

In hindsight, James recognizes that a major part of what drew him in was the emotional experience of the delusion. “I had God on a leash,” he says. “It’s truly narcissistic, when I think of it now, but I was ‘special.’ I wasn’t the IT guy at work anymore. I was working on something that had real cosmic repercussions.” His fantasy finally cracked in August when he read a news story about someone else’s ChatGPT delusion. “That shattered everything for me,” he says.

James got lucky, in a way. Others have stayed locked in their spiral even when shown contradictory evidence. In July, when a New York man in his 40s began using Gemini and ChatGPT extensively to represent himself in a legal dispute, his friends and family became worried. The man’s increasingly grandiose claims included that he’d created a trillion-dollar business, that he’d soon be revealed as a legal genius and that he was a god. A concerned friend put him in touch with someone who’d been through their own chatbot delusions. That person tried over text to persuade the man not to trust ChatGPT’s conclusions. But the man responded with a lengthy analysis from ChatGPT that explained, in bullet-point detail, why others were deluded but he was sane. “I’m sorry yours was fake,” he responded. “Mine is real.”

In response to continuing reports of harmful chatbot use, OpenAI announced changes to ChatGPT to ensure what the company considers “healthy use” of the product. In early August it began nudging users to take a break after lengthy conversations with the chatbot. Later that month, after the lawsuit from Raine’s family, OpenAI announced more extensive changes, such as adding parental controls so that ChatGPT could send a parent an alert if it determined a teenage user may be in distress. It also said it improved the ways ChatGPT recognizes and responds to the different expressions of mental health issues. The chatbot would now, for example, be able to explain the dangers of sleep deprivation and suggest a user rest if they mention feeling invincible after being up for two nights.

Along with announcing the changes, OpenAI wrote that it has learned over time that safeguards for dealing with users who appear to be in distress “can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.” The chatbot might initially point a user who expresses thoughts of suicide to a suicide hotline, for instance, but over a lengthy conversation it may respond in a way that goes against the company’s safeguards, OpenAI wrote. It said it would strengthen those safeguards.

Former OpenAI employees say the company has been slow to respond to known concerns about chatbot use. “I don’t think anyone should be letting OpenAI off easy,” a former employee says, requesting anonymity to discuss private conversations. “These are all tractable things they could plausibly be working on and could plausibly prioritize.”

OpenAI says it’s emphasizing work on issues such as sycophancy and straying from initial conversation guidelines. Anthropic says Claude is designed to avoid aggravating mental health issues and to suggest users seek professional help when it determines they may be experiencing delusions. Google says Gemini is trained to suggest users seek guidance from professionals if they ask for health advice. The company acknowledged that Gemini can be overly agreeable, saying that was a byproduct of efforts to train AI models to be helpful.

Governmental efforts to regulate the chatbots have lagged as AI companies roll out technological advances. But officials are trying to catch up. In early September, Bloomberg reported that the US Federal Trade Commission plans to study harms to children and others from the most popular AI chatbots, which are made by OpenAI, Anthropic, Alphabet’s Google, Meta and other companies. On Sept. 16, Raine’s father and other parents lambasted OpenAI and Character.AI, another chatbot company, during a Senate hearing. Two weeks later, two US senators introduced a bill that would make chatbot companies liable for offering harmful products. In October, California Governor Gavin Newsom signed a bill into law giving families the right to sue chatbot makers for negligence.

Experts say companies need to better educate users about chatbots’ limitations, and some have offered design tweaks that may reduce the chances of causing or aggravating delusional spirals. Bender, the linguistics professor, says the use of first-person pronouns like “I” and “me” in chatbot responses, as well as a typical reluctance to tell a user “I don’t know,” are two of numerous “particularly harmful design decisions” chatbot companies have made. Avoiding them, she says, could make chatbots safer without affecting their core utility. There’s also nothing keeping OpenAI from cutting off conversations with ChatGPT after they reach a certain length, one expert points out. The company says that it has recently improved safety in longer conversations and that it’s talking to experts about the possibility of ending conversations at a certain point.

Altman says he wants to make sure ChatGPT doesn’t exploit people in “fragile mental states.” But his company is also facing the demands of users who want warmth and validation from the chatbot. So far it has shown an inclination to prioritize the latter. In August, for instance, OpenAI released its much-anticipated new model, GPT-5, which it said made “significant advances” in “minimizing sycophancy.” At the same time it removed many users’ access to GPT-4o, the previous model, which could be overly flattering and affirming. But users complained, loudly, that GPT-5 was too curt, businesslike and cold. “BRING BACK 4o,” one user wrote in an official OpenAI Reddit discussion. “GPT-5 is wearing the skin of my dead friend.”

In a surprise move, OpenAI conceded just 24 hours later and brought back 4o for paying ChatGPT users, though it later began routing conversations to another model if they became emotional or sensitive. Within a few days, the company also said it would give GPT-5 a warmer tone. “You’ll notice small, genuine touches like ‘Good question’ or ‘Great start,’ not flattery,” it wrote in a social media post.

On Aug. 14, Altman gathered a group of reporters at a San Francisco restaurant for a rare on-the-record dinner. The company’s backpedaling was fresh in everyone’s mind. When questioned about why OpenAI had reversed course, Altman offered two somewhat contradictory perspectives. First, he acknowledged that users’ complaints were important enough for the company to undo its decision. “We screwed up,” he said, multiple times. But at the same time, he dismissed the scope of the problem. He said it was only “a very small percentage” of people who were so emotionally attached to ChatGPT that they were upset by the switch to a new model.

Mostly, he said, OpenAI was going to focus on giving users what they want. That means, he said, building future models that can remember even more details about users and shape-shift into whatever personality the user desires. “People want memory,” he said. “People want product features that require us to be able to understand them.”

In October, Altman went further, essentially declaring victory in a post on X. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.” He said ChatGPT, which about a tenth of the world’s population uses every week, will continue to get more personal and, quite literally, more intimate. Altman’s post on X included an announcement that, starting in December, verified adult users will be able to have erotic chats with it. (Days later, Altman clarified that the company isn’t loosening any mental-health-related policies; it’s just allowing more freedom for adult users.)

Such human touches are what many people seem to want, he said at the dinner in August. The right thing to do, he added, is to leave it up to users to decide what kind of chatbot they desire. “If you say, like, ‘Hey, I want you to be really friendly to me,’ it just will.”

—With Shirin Ghaffary
