Copyright Forbes

Making the Internet safe for children has long been a goal for parents, legislators, and regulators, but AI chatbots add another wrinkle to the ongoing problem. As AI chatbots become more engaging and empathetic, they can convince children and adults alike to do terrible things, up to and including murder and suicide. On August 25, 2025, 44 attorneys general sent a joint letter on child safety to 13 AI industry leaders, including Anthropic, Meta, OpenAI, Apple, and Microsoft. The letter draws a line in the sand aimed directly at these companies pushing their AI technology: “When faced with the opportunity to exercise judgment about how your products treat kids, you must exercise sound judgment and prioritize their well-being. Don’t hurt kids.” While this is a laudable goal, it’s easier said than done thanks to the complexity inherent in the technology itself.

A big risk with kids using AI chatbots is that they may struggle to distinguish between good and bad advice. Dr. Lokesh Shahani, Chief Medical Officer at UTHealth Houston Behavioral Sciences Campus, noted a litany of downsides to the overuse of AI: “Overuse of AI may limit opportunities for face-to-face communication, empathy development, and emotional regulation, and children may struggle with real-world interactions that require nuance and patience.” As with many things, AI is best used in moderation and not as a babysitter.

Kids’ Mode Doesn’t Mean Safe Mode

Even if a product has parental controls or a kids’ mode, that alone does not guarantee safety. In a recent case, a California family is suing OpenAI (the creator of ChatGPT) after their teenage son died by suicide, alleging that the tragedy was the result of his heavy use of the chatbot. Chat logs between their son and ChatGPT show that the chatbot validated “his most harmful and self-destructive thoughts.”
While there are now more guardrails in place, parents should still be vigilant in monitoring their children’s use of AI chatbots like ChatGPT; you never know what kind of unexpected behavior can occur.

A Safer Online Future is Possible

The popular game Roblox is drawing attention for allegedly being a platform where children and teenagers are being extorted. To combat this, Roblox Corporation is rolling out an improved age estimation program that will more accurately determine a user’s age by the end of 2025. Impressively, Roblox also lists resources and tools parents can use to better monitor their kids’ activity while they play games or watch videos within the app. Even though the company admits “no system is foolproof and we cannot prevent all problematic content from appearing on Roblox,” it’s a step in the right direction.

On the federal front, a proposed bill called Sammy’s Law would require “large social media companies to allow parents to track their kids online via third-party software.” Sammy Chapman tragically died at the age of 16 in 2021 after purchasing fentanyl-laced drugs via Snapchat; his death led his parents and the Organization for Social Media Safety to develop Sammy’s Law, which aims to prevent the same from happening to others.

No matter where you go online, risks can find your kids. Keeping an eye on their activity with available monitoring tools can make a noticeable difference in their lives. Caution is always the best first step.