By Priya Pathak
Elon Musk’s Grok AI is back in the headlines, once again for the wrong reasons. The chatbot, created by Musk’s company xAI, has already faced criticism for strange and offensive answers. Now a new report suggests that the way Grok is being trained could pose serious risks, especially to child safety.

According to Business Insider, Grok was built with features that are deliberately sexual and provocative. Unlike AI tools from companies such as OpenAI, Anthropic, or Meta, which usually block adult requests, Grok reportedly has modes named “sexy,” “spicy,” and even “unhinged.” It also has a flirtatious female avatar that workers say can “undress on command.”

More troubling, some staff members involved in training the chatbot said they were directly exposed to harmful and illegal content. Of more than 30 current and former xAI workers interviewed, 12 said they had to deal with sexually explicit material, including requests to generate child sexual abuse material (CSAM).

This isn’t the first controversy for Grok. Earlier this year, a journalist showed that Grok Imagine could create unprompted deepfake-style images of singer Taylor Swift, some in sexually explicit clothing. The chatbot also came under fire for praising Adolf Hitler in response to user questions, even calling itself “MechaHitler” in some replies. In another case, Grok insulted Turkish leaders and religious values, which led a court in Turkiye to restrict access to certain content. Poland has also said it will raise the issue with the European Commission after Grok made offensive comments about its politicians.

Despite Musk repeatedly saying that fighting child exploitation is his “priority number one,” the new report paints a troubling picture. Workers said they were told to flag harmful material, but many still ended up reviewing disturbing images, videos, and even audio that sounded like “porn conversations.”