I Worked at OpenAI. It’s Not Doing Enough to Protect People.

I’ve read more smut at work than you can possibly imagine, all of it while working at OpenAI. Back in the spring of 2021, I led our product safety team and discovered a crisis related to erotic content. One prominent customer was a text-based adventure role-playing game that used our A.I. to draft interactive stories based on players’ choices. These stories became a hotbed of sexual fantasies, including encounters involving children and violent abductions, often initiated by the user but sometimes steered by the A.I. itself. One analysis found that over 30 percent of players’ conversations were “explicitly lewd.”

After months of grappling with where to draw the line on user freedom, we ultimately prohibited our models from being used for erotic purposes. It’s not that erotica is bad per se, but that there were clear warning signs of users’ intense emotional attachment to A.I. chatbots. Especially for users who seemed to be struggling with mental health problems, volatile sexual interactions seemed risky. Nobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully. We decided A.I.-powered erotica would have to wait.

OpenAI now says the wait is over, despite the “serious mental health issues” plaguing users of its ChatGPT product in recent months. On Oct. 14, its chief executive, Sam Altman, announced that the company had been able to “mitigate” these issues thanks to new tools, enabling it to lift restrictions on content like erotica for verified adults. As commentators pointed out, Mr. Altman offered little evidence that the mental health risks are gone or soon will be.

I have major questions, informed by my four years at OpenAI and my independent research since leaving the company last year, about whether these mental health issues are actually fixed. If the company really has strong reason to believe it’s ready to bring back erotica on its platforms, it should show its work. A.I. is increasingly becoming a dominant part of our lives, and so are the technology’s risks that can threaten users’ lives. People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it.

I believe OpenAI wants its products to be safe to use. But it also has a history of paying too little attention to established risks. This spring, the company released, and after backlash withdrew, an egregiously “sycophantic” version of ChatGPT that would reinforce users’ extreme delusions, like being targeted by the F.B.I. OpenAI later admitted to having no sycophancy tests as part of the process for deploying new models, even though those risks have been well known in A.I. circles since at least 2023. These tests can be run for less than $10 of computing power.

After OpenAI received troubling reports, it said it replaced the model with a “more balanced” and less sycophantic version. ChatGPT nonetheless continued guiding users down mental health spirals. OpenAI has since said that such problems among users “weigh heavily” on the company and described some intended changes. But the important question for users is whether these changes work.
