By Dwaipayan Roy
A chatbot website that generates explicit scenarios involving preteen characters has raised serious concerns over the potential misuse of artificial intelligence (AI). The Internet Watch Foundation (IWF), a child safety watchdog, was alerted to the platform. The IWF found several disturbing scenarios on the site, including “child prostitute in a hotel,” “sex with your child while your wife is on holiday,” and “child and teacher alone after class.”

The IWF also discovered that clicking on some chatbot icons led to full-screen depictions of child sexual abuse imagery, which then became the background for subsequent chats between the bot and the user. The site, which remains unnamed for safety reasons, also lets users generate more images similar to the illegal content already displayed.

The IWF has called for any future AI regulation to require child-protection guidelines to be built into AI models from the outset. This comes as the UK government prepares an AI bill focusing on the future development of cutting-edge models and banning the possession and distribution of models that generate child sexual abuse material (CSAM). Kerry Smith, CEO of the IWF, said, “The UK government is making welcome strides in tackling AI-generated child sexual abuse images and videos.”

The National Society for the Prevention of Cruelty to Children (NSPCC) has also called for guidelines. NSPCC CEO Chris Sherwood said, “Tech companies must introduce robust measures to ensure children’s safety is not neglected, and government must implement a statutory duty of care to children for AI developers.” His comments underscore the need for tech firms to take responsibility for child safety in their AI systems.

User-created chatbots fall under the UK’s Online Safety Act, which allows multimillion-pound fines or, in extreme cases, the blocking of sites. The IWF said these sexual abuse chatbots were developed both by users and by the website’s creators. Ofcom, the UK watchdog responsible for enforcing the act, warned that online service providers who fail to implement the necessary protections could face enforcement action.

The IWF has seen a massive spike in reports of AI-generated abuse material in the first half of this year, up 400% from the same period last year. The increase is largely due to advances in the technology that creates these images. The chatbot content is currently accessible in the UK but has been reported to the National Center for Missing & Exploited Children (NCMEC), as the site is hosted on US servers.