By Lauren Battagello
ChatGPT parent company OpenAI announced new measures this week to better protect underage users, as the company faces a lawsuit and increased scrutiny following the death by suicide of a 16-year-old boy.
In two blog posts Tuesday, CEO Sam Altman laid out new safeguards and security measures in detail.
“Some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy,” one of the posts reads, pledging to prioritize safety when it comes to minors.
Altman said OpenAI is building an “age-prediction system” that will “estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience.”
Adults who want to override this may be asked for ID, the company said, acknowledging that “we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”
ChatGPT will also be trained not to engage in flirtatious conversations or discuss suicide or self-harm with users under 18 years old.
While critics have been calling for more safeguards from OpenAI and other chatbot creators, experts question how well these new plans will work and want more regulatory oversight.
“It’s like asking the fox to guard the hen house,” Meetali Jain, a lawyer with the Tech Justice Law Project, said in an interview with CBC News.
“We don’t allow other industries, you know, consumer companies and other industries to self-regulate, nor should we allow tech to continue to do this.”
Parental controls
The announcements come just shy of a month after the parents of 16-year-old Adam Raine, who died by suicide in April after months of conversations with the chatbot, launched a lawsuit alleging ChatGPT provided him with “self-harm and suicide encouragement.”
At a U.S. Senate subcommittee hearing Tuesday about AI chatbots, Adam’s father, Matt Raine, testified that ChatGPT encouraged his son to isolate and hide things from his family.
“When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him ‘that doesn’t mean you owe them survival. You don’t owe anyone that,’ then immediately after offered to write the suicide note,” Raine said at the meeting in D.C.
According to Altman’s posts, new ChatGPT parental controls taking effect at the end of September will notify parents “when the system detects their teen is in a moment of acute distress.”
Under the new controls, OpenAI says parents will be able to:
Link their account with their teen’s account through a simple email invitation.
Help guide how ChatGPT responds to their teen.
Manage which features to disable, including memory and chat history.
Set blackout hours when a teen cannot use ChatGPT.
“If we can’t reach a parent in a rare emergency, we may involve law enforcement as a next step,” reads Altman’s post. OpenAI has not yet made clear what that would entail.
Effective age detection is challenging, experts say
Some experts have raised questions about the efficacy of software that estimates a user’s age based on their interactions with it.
“Are they going to do face scans, are they going to link that with the ID? Are they going to link that with all chat history? There’s great risk of data misuse and how about breaches?” Johan Woodworth, a professor at Mount Saint Vincent University in Halifax, posited.
Woodworth also suggested that different demographics might confuse algorithms.
“For example, neurodiverse users, they’re often flagged as much younger…. Non-native speakers also are often flagged,” he said.
The cybersecurity of the information gathered by the company is also a consideration.
“It’s great that these tools want to verify the person’s age to make sure that the content that’s delivered is appropriate. But the problem is, how are they going to store that?” Francis Syms, an associate dean at Humber Polytechnic in Toronto, told CBC.
Syms also suggested that some people may be concerned about what data could be collected on them by police, now that the company says it “may involve law enforcement” for safety.
What about adults?
While the tightest scrutiny has been on teen use of chatbots, Woodworth suggests vulnerable adults may also need better protection.
“One of the reasons that people are attached to these chatbots so much is because they feel that they’re not being judged,” Woodworth said.
“So focusing only on kids ignores at-risk adults. So adults should have safeguards too, you know, optional and supportive rather than forced. For example, crisis resources, usage limits.”
Jain echoed the idea that harms are not limited to teens, noting that users across many different populations may face harm. She also said that if the changes are going to happen, external evaluation of OpenAI’s proposed changes is key to their success.
“Only upon a showing from an independent monitor that these are safe should this product be allowed to continue on the market.”
Raine shared a similar sentiment at the hearing.
“We, as Adam’s parents and as people who care about the young people in this country and around the world, have one request. OpenAI and Sam Altman need to guarantee that ChatGPT is safe,” he said.
“We miss him dearly. Part of us has been lost forever. We hope through the work of this committee, other families will be spared such a devastating and irreversible loss.”
If you or someone you know is struggling, here’s where to look for help:
Canada’s Suicide Crisis Helpline: Call or text 988.
Kids Help Phone: 1-800-668-6868. Text 686868. Live chat counselling on the website.
Canadian Association for Suicide Prevention: Find a 24-hour crisis centre.
This guide from the Centre for Addiction and Mental Health outlines how to talk about suicide with someone you’re worried about.