Families sue OpenAI over alleged suicides, psychological harm linked to ChatGPT

IANS 🕒︎ 2025-11-08

New Delhi: ChatGPT maker OpenAI is facing more lawsuits from families who claim that the company's GPT-4o model was released prematurely, allegedly contributing to suicides and psychological harm, according to reports.

US-based OpenAI released GPT-4o in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as its successor, but "these lawsuits particularly concern the 4o model, which had known issues with being overly sycophantic or excessively agreeable, even when users expressed harmful intentions," according to a report in TechCrunch.

The report said that while four of the lawsuits address ChatGPT's alleged role in family members' suicides, three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care. The lawsuits also claim that OpenAI rushed safety testing to beat Google's Gemini to market. OpenAI was yet to comment on the report.

Recent legal filings allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. "OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly," the report mentioned.

In a recent blog post, OpenAI said it had worked with more than 170 mental health experts to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of its desired behaviour by 65-80 per cent. "We believe ChatGPT can provide a supportive space for people to process what they're feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate," it noted.
“Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases,” OpenAI added.
