
OpenAI has said that its latest chatbot update aims to make ChatGPT a safer tool for users experiencing distress, particularly those showing signs of psychosis, mania, self-harm or emotional over-reliance on AI. The company, which triggered the AI boom in November 2022 and has become almost synonymous with conversational chatbots, said the changes were informed by a team of over 170 mental health experts. It claims the update has reduced responses that fall short of its desired safety standards by 65 to 80 per cent. But with hundreds of millions of users engaging weekly, the real-world impact of AI on vulnerable minds remains difficult to assess.

How many users are really at risk?

OpenAI estimates that about 0.07 per cent of its users, or roughly 560,000 people weekly, show possible signs of psychosis or mania, while a further 0.15 per cent exhibit signs of suicidal thinking or emotional over-reliance on the platform. In absolute terms, that's millions of people potentially exposed to serious mental health challenges during AI conversations.

"Psychosis and mania are challenging to detect, and even small measurement differences can significantly affect these numbers", the company noted in a blog post. But OpenAI's reliance on internal definitions of "desirable behaviour" and controlled testing scenarios could be painting a rosier picture than what happens in real, day-to-day interactions.

Numerous reports have suggested that AI chatbots can reflect, and even amplify, users' fears or delusions. In one instance, a recent report by NHS doctors found that large language models, which are "designed to be compliant and sympathetic", can exacerbate existing vulnerabilities.

"The issue is that AI is a mirror", warned Sahra O'Doherty, president of the Australian Association of Psychologists. "It reflects back what you put in, and that becomes dangerous when the person is already at risk".

OpenAI's new safety measures

OpenAI reports measurable improvements: the share of responses meeting its desired behaviour standards jumped from 27 per cent to 92 per cent on psychosis-related prompts, and from 77 per cent to 91 per cent on suicide-related interactions. The model now refers users to crisis hotlines and prompts them to take breaks during long sessions. External experts reviewed over 1,800 model outputs, helping OpenAI claim that GPT‑5 is now over 95 per cent reliable even in longer, more complex conversations.

But the numbers raise as many questions as they answer. How often do AI interventions actually prevent harm? Are users who develop suicidal thoughts or psychotic episodes doing so because of AI, or are they turning to it when already in crisis? OpenAI itself distances the technology from causality, acknowledging that the model may simply be used by people who are already struggling.

Philosopher Dr Raphaël Millière said: "Humans are not wired to be unaffected by constant praise. A sycophantic AI that always agrees and never tires may change how people expect humans to interact with them, particularly for a generation growing up with this technology."

AI in mental health support

There is no doubt that AI has the potential to help users outside traditional therapy hours, providing support and structured guidance. But the very features designed to make ChatGPT engaging, from its compliance and empathy to its willingness to mirror the user, may simultaneously create risks.
The company faces the delicate task of balancing user safety against engagement, monetisation pressures and the appeal of AI companionship. OpenAI's blog post makes clear that these updates are part of a broader effort to measure and mitigate risks, with emotional reliance and non-suicidal mental health crises now tracked as standard baseline metrics alongside suicide and self-harm. Still, the tech behemoth stresses that these cases are "extremely rare" and that measuring them is difficult, leaving room for both undercounting and overconfidence in the model's effectiveness.

With over 800 million weekly users of ChatGPT alone, even small percentages represent hundreds of thousands of people. OpenAI estimates that, aggregated across all major AI platforms, up to 5.5 million people may experience some form of mental health concern each week while using generative AI. The numbers are significant, but the true scale, the real-world outcomes and the causal links remain largely opaque. The challenge for Sam Altman and OpenAI, then, is ensuring that these digital tools are a help, not a hidden hazard.