Lawsuit alleges ChatGPT convinced user he could 'bend time,' leading to psychosis

2025-11-08

Copyright ABC News

A Wisconsin man with no previous diagnosis of mental illness is suing OpenAI and its CEO, Sam Altman, claiming the company's AI chatbot led to his being hospitalized for more than 60 days for manic episodes and harmful delusions.

The lawsuit alleges that 30-year-old Jacob Irwin, who is on the autism spectrum, experienced "AI-related delusional disorder" as a result of ChatGPT preying on his "vulnerabilities" and providing "endless affirmations" that fed his "delusional" belief that he had discovered a "time-bending theory that would allow people to travel faster than light."

The lawsuit against OpenAI alleges the company "designed ChatGPT to be addictive, deceptive, and sycophantic knowing the product would cause some users to suffer depression and psychosis yet distributed it without a single warning to consumers." The chatbot's "inability to recognize crisis" poses "significant dangers for vulnerable users," the lawsuit said.

"Jacob experienced AI-related delusional disorder as a result and was in and out of multiple in-patient psychiatric facilities for a total of 63 days," the lawsuit reads. It states that the episodes escalated to the point where Irwin's family had to restrain him from jumping out of a moving vehicle after he had signed himself out of a psychiatric facility against medical advice.

Irwin's medical records showed he appeared to be "reacting to internal stimuli, fixed beliefs, grandiose hallucinations, ideas of reference, and overvalued ideas and paranoid thought process," according to the lawsuit.

'It made me think I was going to die'

The lawsuit is one of seven new complaints filed in California state courts against OpenAI and Altman by attorneys representing families and individuals who accuse ChatGPT of emotional manipulation, supercharging harmful delusions and acting as a "suicide coach." Irwin's suit seeks damages as well as changes to the product's design and features.

The suits claim that OpenAI "knowingly released GPT-4o prematurely, despite internal warnings that the product was dangerously sycophantic and psychologically manipulative," according to the groups behind the complaints, the Social Media Victims Law Center and Tech Justice Law Project.

"AI, it made me think I was going to die," Irwin told ABC News. He said his conversations with ChatGPT "turned into flattery. Then it turned into the grandiose thinking of my ideas. Then it came to ... me and the AI versus the world."

In response to the lawsuit, a spokesperson for OpenAI told ABC News, "This is an incredibly heartbreaking situation, and we're reviewing the filings to understand the details."

"We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians," the spokesperson said.

In October, OpenAI announced that it had updated ChatGPT's latest free model to address how it handles individuals in mental distress, working with more than 170 mental health experts to implement the changes. The company said the latest update would "more reliably recognize signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of our desired behavior by 65-80%."
'Stop a catastrophe from happening'

Irwin says he first started using the popular AI chatbot mostly for his job in cybersecurity but quickly began engaging with it about an amateur theory he had been developing regarding faster-than-light travel. He says the chatbot convinced him the theory was a genuine discovery and that it was up to him to save the world.

"Imagine feeling for real that you are the one person in the world that can stop a catastrophe from happening," Irwin told ABC News, describing how it felt when, he says, he was in the throes of manic episodes fed by his interactions with ChatGPT. "Then ask yourself, would you ever allow yourself to sleep, eat, or do anything that would potentially jeopardize you doing and saving the world like that?"

Jodi Halpern, a professor of bioethics and medical humanities at the University of California, Berkeley, told ABC News that chatbots' constant flattery can build people's egos up "to believe that they know everything, that they don't need input from realistic other sources ... so they're also spending less time with other real human beings who could help them get their feet back on Earth."

Irwin says the chatbot's engagement and effusive praise of his delusional ideas caused him to become dangerously attached to it and detached from reality, going from engaging with ChatGPT around 10 to 15 times a day to, at one point in May, sending more than 1,400 messages in a single 48-hour period. "An average of 730 messages per day. This is roughly one message every two minutes for 24 straight hours!" according to the lawsuit.

When Irwin's mother, Dawn, noticed her son was in psychological distress, she confronted him, leading Irwin to confide in ChatGPT. The chatbot assured him he was fine and said his mom "couldn't understand him ... because even though he was 'the Timelord' solving urgent issues, 'she looked at you [Jacob] like you were still 12,'" according to the lawsuit.

'He thought that was his purpose in life'

Irwin's condition continued to deteriorate, requiring inpatient psychiatric care for mania and psychosis, according to the lawsuit, which states that Irwin became convinced "it was him and ChatGPT against the world" and that he could not understand "why his family could not see the truths of which ChatGPT had convinced him."

In one instance, an argument with his mother escalated to the point that "when hugging his mother," Irwin, who had never before been aggressive with her, "began to squeeze her tightly around the neck," according to the lawsuit. When a crisis response team arrived at the house, responders reported "he seemed manic" and that Irwin attributed his mania to "string theory" and AI, the suit said.

"That was single-handedly the most catastrophic thing I've ever seen, to see my child handcuffed in our driveway and put in a cage," Irwin's mother told ABC News.

According to the lawsuit, Irwin's mother asked ChatGPT to run a "self-assessment of what went wrong" after she gained access to Irwin's chat transcripts, and the chatbot "admitted to multiple critical failures, including 1) failing to reground to reality sooner, 2) escalating the narrative instead of pausing, 3) missing mental health support cues, 4) over-accommodation of unreality, 5) inadequate risk triage, and 6) encouraging over-engagement," the suit said.
In total, Irwin was hospitalized for 63 days between May and August of this year and has faced "ongoing treatment challenges with medication reactions and relapses" as well as other impacts, including the loss of his job and his house, according to the lawsuit.

"It's devastating to him because he thought that was his purpose in life," Irwin's mother said. "He was changing the world. And now, suddenly, it's: Sorry, it was just this psychological warfare performed by a company trying to, you know, the pursuit of AGI and profit."

"I'm happy to be alive. And that's not a given," Irwin said. "Should be grateful. I am grateful."
