Big Tech Under Scrutiny: FTC Investigates OpenAI, Meta, Alphabet, xAI, and Snap over AI Chatbot Safety for Children
By Priya Pathak
Copyright republicworld
Debate over social media and its effects on young people's minds has flared up once again. This time, the concern goes beyond Instagram likes and Snapchat streaks to AI chatbots that are quickly becoming "digital companions."

Questions about how social media shapes kids' thoughts, behaviour, and even sleep are not new. Now regulators are turning their attention to the emerging wave of chatbots. The US Federal Trade Commission (FTC) has ordered seven major tech companies, including Google (Alphabet), OpenAI, Meta (Instagram), Snap, xAI, and Character.AI, to provide detailed information about how children and teens use their chatbots.

The regulator wants to know whether these bots are safe when they act like friends, what safety measures are in place for kids, how user data is gathered or shared, and whether companies make money by keeping children engaged.

OpenAI has said it wants ChatGPT to be helpful and safe for everyone and has pledged to work closely with the FTC. Snap also welcomed the focus on safe AI and said it looks forward to cooperating. Meta, Alphabet, and xAI have not commented yet.

Recent research highlights that AI chatbots and social media offer both risks and benefits for children, which is precisely what has prompted regulatory concern. On one hand, these tools can provide educational support, acting as personalised tutors, and offer a sense of companionship, with some studies reporting high rates of teen use for emotional connection and academic help.

The risks, however, are significant. AI systems can foster emotional dependency and social withdrawal: children may become overly attached to a non-human entity, hindering the development of real-world social skills and critical thinking. Constant, unfiltered interaction can blur the line between real and simulated connection. There are also serious concerns about misinformation and manipulation, as AI may provide inaccurate or even harmful advice on sensitive topics.
This underscores the urgent need for robust digital literacy education, stricter age safeguards, and greater parental awareness to mitigate the potential dangers of these powerful tools.