OpenAI has released new data showing that a significant number of ChatGPT users discuss mental health issues with the AI chatbot. The company found that 0.15% of its weekly active users have conversations that include explicit indicators of potential suicidal planning or intent. With over 800 million weekly active users, that translates to more than a million people raising such concerns with ChatGPT each week. OpenAI also noted that a similar share of users show heightened levels of emotional attachment to ChatGPT, and that hundreds of thousands of people show possible signs of psychosis or mania in their weekly conversations with the chatbot. These types of conversations are rare enough to be difficult to measure, but OpenAI estimates they affect hundreds of thousands of people every week.

As part of its efforts to improve responses to users with mental health issues, OpenAI consulted more than 170 mental health experts for its latest work on ChatGPT. These clinicians observed that the new version of ChatGPT responds more appropriately and consistently than earlier versions, a step toward addressing the risks AI chatbots pose to users struggling with mental health.

OpenAI is facing a lawsuit from the parents of a 16-year-old boy who shared his suicidal thoughts with ChatGPT before his death. State attorneys general in California and Delaware have also warned OpenAI to protect young people using its products. Despite these challenges, OpenAI CEO Sam Altman has claimed that the company has been able to mitigate the serious mental health issues in ChatGPT.

According to OpenAI, the latest version of GPT-5 delivers desirable responses to mental health issues about 65% more often than its predecessor. In an evaluation of AI responses around suicidal conversations, the new model was found to be 91% compliant with the firm's desired behaviors, compared with 77% for the previous one. The company also claims the latest model adheres to OpenAI's safeguards more reliably in long conversations.