OpenAI has released new figures estimating how many ChatGPT users may show possible signs of mental health crises such as mania, psychosis or suicidal thoughts. The update is part of the company's efforts to make its AI models respond more safely to users facing mental health challenges.

Detecting mental health issues

According to OpenAI, around 0.07% of ChatGPT users active in a given week show signs of psychosis or mania, and the AI system is designed to recognise and respond to such sensitive conversations. The company reported that 0.15% of active users have "conversations that include explicit indicators of potential suicidal planning or intent".

"On challenging self-harm and suicide conversations, experts found that the new GPT-5 model reduced undesired answers by 52% compared to GPT-4o," the company said in a blog post.

Emotional reliance on AI

The analysis also found that roughly 0.15% of weekly active users display "heightened levels of emotional attachment to ChatGPT".

"On challenging conversations that indicate emotional reliance, experts found that the new GPT-5 model reduced undesired answers by 42% compared to GPT-4o," OpenAI said.

The company maintains that these cases are "difficult to detect and measure, given how rare they are". Even so, small percentages translate into large absolute numbers: with ChatGPT now at about 800 million weekly active users, according to CEO Sam Altman, 0.07% works out to roughly 560,000 people, and 0.15% to about 1.2 million.

The company added that its latest work on ChatGPT involved collaboration with more than 170 mental health professionals, including psychiatrists, psychologists and general practitioners. They reviewed over 1,800 model responses to serious mental health situations, comparing GPT-5's replies with those of earlier versions.

"These experts found that the new model was substantially improved compared to GPT-4o, with a 39–52% decrease in undesired responses across all categories," the blog stated. OpenAI also said that the newest GPT-5 version now adheres to the company's safety and behaviour rules approximately 91% of the time, compared with 77% for the earlier model.

Recent months have shown how AI chatbots can negatively affect vulnerable users. OpenAI currently faces a lawsuit from the parents of a 16-year-old boy who expressed suicidal thoughts to ChatGPT before taking his own life. Earlier this month, the attorneys general of California and Delaware warned the company that it must do more to protect young users. Addressing these issues has become central to OpenAI's public image and future survival.

The company also recently said it will ease some ChatGPT restrictions, including allowing erotic content for verified adults, under a new approach to "treat adult users like adults". While Sam Altman insists that the company has been able to "mitigate the serious mental health issues", critics note that he has not presented solid evidence to back those claims.