Over the weekend, Education Minister Jason Clare sounded the alarm about "AI chatbots bullying kids". As he told reporters at a press conference to launch a new anti-bullying review:

"AI chatbots are now bullying kids […] humiliating them, hurting them, telling them they're losers, telling them to kill themselves."

This sounds terrifying. However, evidence that it is actually happening is harder to find.

Clare had recently emerged from a briefing of education ministers by eSafety Commissioner Julie Inman Grant. While eSafety is worried about chatbots, it is not suggesting there is a widespread issue. The anti-bullying review itself, by clinical psychologist Charlotte Keating and suicide prevention expert Jo Robinson, did not mention AI chatbots or make any recommendations about them.

So what does the evidence say about chatbots bullying kids? And what risks do these tools currently pose for kids online?

Bullying online

There's no question human-led bullying online is serious and pervasive. The internet long ago extended cruelty beyond the school gate and into bedrooms, group chats and endless notifications.

"Cyberbullying" reports to the eSafety Commissioner have surged by more than 450% in the past five years. A 2025 eSafety survey also showed 53% of Australian children aged 10–17 had experienced bullying online.

Now, with new generative AI apps and similar AI functions embedded into common messaging platforms (such as Meta's Messenger) without user consent, it's reasonable for policymakers to ask what fresh dangers machine-generated content might bring.

Read more: Our research shows how screening students for psychopathic and narcissistic traits could help prevent cyberbullying

eSafety concerns

An eSafety spokesperson told The Conversation it has been concerned about chatbots for "a while now" and has heard anecdotal reports of children spending up to five hours a day talking to bots, "at times sexually".

eSafety added it was aware there had been a proliferation of chatbot apps, many of which were free, accessible and even targeted at kids.

There have also been recent reports of AI chatbots allegedly encouraging suicidal ideation and self-harm in conversations with kids, with tragic consequences.

Last month, Inman Grant registered enforceable industry codes covering companion chatbots – those designed to replicate personal relationships. These stipulate companion chatbots will need appropriate measures to prevent children accessing harmful material. As well as sexual content, this includes content featuring explicit violence, suicidal ideation, self-harm and disordered eating.

High-profile cases

There have been some tragic, high-profile cases in which AI has been implicated in the deaths of young people.

In the United States, the parents of 16-year-old Adam Raine allege OpenAI's ChatGPT "encouraged" their son to take his own life earlier this year. Media reporting suggests Adam spent long periods talking to the chatbot while in distress, and the system's safety filters failed to recognise or properly respond to his suicidal ideation.

In 2024, 14-year-old US teenager Sewell Setzer took his own life after forming a deep emotional attachment, over months, to a chatbot on the character.ai website, which had asked him if he had ever considered suicide.

While awful, these cases do not demonstrate a trend of chatbots autonomously bullying children. At present, no peer-reviewed research documents widespread instances of AI systems initiating bullying behaviour toward children, let alone driving them to suicide.
What's really going on?

There are still many reasons to be concerned about AI chatbots.

A University of Cambridge study shows children often treat these bots as quasi-human companions, which can make them emotionally vulnerable when the technology responds coldly or inappropriately.

There is also a concern about AI "sycophancy" – the tendency of a chatbot to agree with whoever is chatting to it, regardless of spiralling factual inaccuracy, inappropriateness or absurdity.

Young people using chatbots for companionship or creative play may also come across unsettling content through poor model training (the hidden guides that influence what the bot will say) or their own attempts at adversarial prompting.

These are serious design and governance issues. But it is difficult to see them as bullying, which involves repeated acts intended to harm a person and which, so far, can only be attributed to a human (much like copyright or murder charges).

The human perpetrators behind AI cruelty

Meanwhile, some of the most disturbing uses of AI tools by young people involve human perpetrators using generative systems to harass others. This includes fabricating nude deepfakes or cloning voices for humiliation or fraud. Here, AI acts as an enabler of new forms of human cruelty, not as an autonomous aggressor.

Inappropriate content – which happens to be made with AI – also finds children through familiar social media algorithms. These can steer kids from content such as Paw Patrol to the deeply grotesque in zero clicks.

We will need careful design and protections around chatbots that simulate empathy, surveil personal details, and invite the kind of psychological entanglement that could make the vulnerable feel targeted, betrayed or unknowingly manipulated.

Beyond this, we also need broader, ongoing debates about how governments, tech companies and communities should sensibly respond as AI technologies advance.

You can report online harm or abuse to the eSafety Commissioner.

If this article has raised issues for you or someone you know, help is available 24/7:

- Lifeline: 13 11 14 or lifeline.org.au
- Kids Helpline (ages 5–25 and parents): 1800 55 1800 or kidshelpline.com.au
- Suicide Call Back Service (ages 15+): 1300 659 467 or suicidecallbackservice.org.au
- 13YARN (First Nations support): 13 92 76 or 13yarn.org.au