
“There’s a huge risk in this area,” Murray said. “We can’t rely on AI companies to self-regulate, particularly when it comes to issues of mental health and suicide risk.

“A poor rendition of an interaction with a bot could lead to the loss of life. That’s the pointy end of looking after someone who’s vulnerable.”

OpenAI chief executive Sam Altman has said that ChatGPT had been deliberately restricted in its expression because of mental health concerns (the company is facing legal action over its products’ alleged role in some youth suicides), but that new mitigations meant it was free to make ChatGPT more personable and human-like.

The company said only 0.15 per cent of users a week have conversations that indicate suicidal intent. But the company also claims to have 800 million weekly users, meaning more than a million instances of suicidal ideation each week.

Even if the latest ChatGPT performs as well in the real world as in testing – delivering responses that OpenAI considers compliant with its support goals 91 per cent of the time – that’s still roughly 108,000 people a week getting an experience that doesn’t meet the company’s self-defined standards.
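
The figures above follow from a simple back-of-the-envelope calculation. A minimal sketch, taking the company’s own reported numbers (800 million weekly users, 0.15 per cent of users with flagged conversations, 91 per cent of responses judged compliant) at face value:

```python
# Back-of-the-envelope check of the article's figures (values as reported by OpenAI).
weekly_users = 800_000_000          # claimed weekly ChatGPT users
flagged_rate = 0.0015               # 0.15% of users with conversations indicating suicidal intent
compliance_rate = 0.91              # share of responses OpenAI judges compliant with its support goals

flagged_users = weekly_users * flagged_rate            # about 1.2 million per week
non_compliant = flagged_users * (1 - compliance_rate)  # about 108,000 per week

print(f"Flagged conversations per week: {flagged_users:,.0f}")
print(f"Below-standard experiences per week: {non_compliant:,.0f}")
```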