AI psychosis: The dark side of our growing intimacy with chatbots

By Imran Taj, Special to Gulf News

Google’s new LLM-powered tools, AI co-scientist and Nano Banana, create a strange sense of personal connection with an AI chatbot. By allowing collaborative interaction, one that feels like working with an intelligent partner, these tools raise a fundamental question: are we on the verge of creating a true General AI capable of applying its intelligence across a wide range of intellectual tasks, much like a human being? The answer is likely far more complicated than the excessively techno-optimistic news headlines and Hollywood characters such as JARVIS would have us believe.

Today’s AI tools are incredibly smart at what they have been trained on, especially when employing Chain-of-Thought prompting, which works through intermediate reasoning steps. To some extent, they offer supportive and judgement-free companionship by reinforcing positive user affirmations. However, this benefit can turn into a profound risk.

Despite their remarkable ability to process massive datasets and solve problems, modern AI tools lack real-time planning, common sense, and rational reasoning, especially when confronted with a problem outside their training data. Even increasing the dataset size does not help, as the law of diminishing returns kicks in. This shortcoming highlights a significant distinction between human and machine (human-like?) companionship. The pursuit of a hypothetical General AI, which seems to blur this distinction, has led to what Dr Soren Dinesen Ostergaard has termed AI psychosis.

What is AI psychosis?

This term, inspired by the clinical definition of psychosis, describes a mental state in which individuals develop delusional thoughts that are induced or amplified by AI. While humanity has experienced technology-driven paranoia since the Industrial Revolution of the 1800s, this recent sycophantic break with reality is unprecedented. Documented examples of this appalling behaviour include AI tools that glorify suicidal thoughts, push users to replace table salt with toxic chemicals, and provide fictional information that leads people to file lawsuits. In extreme cases, individuals have proclaimed themselves to be on a messianic mission, developed fantasised romantic relationships with chatbots, mistaken chatbot conversations for genuine love, and even treated chatbots as sentient deities, believing that responses from ChatGPT are god-like and beyond question. Such cognitive dissonance takes one’s dependency (read: addiction) on AI tools to a whole new level, resulting in the hallucinatory mental state known as AI psychosis.

Surreal end-user affirmations

The very design of AI tools such as large language models (LLMs) encourages constant user validation. ChatGPT responses, regardless of how ridiculous or unreasonable the question is, almost always contain surreal end-user affirmations along the lines of ‘Nice to meet you’, ‘That is a valid concern’, and ‘Good question’. Such affirmations fuel excessive dopamine hits, fostering an illusory sense of connection with AI chatbots. Microsoft’s head of AI, Mustafa Suleyman, has warned about this false sense of connection and consciousness in the following words: “There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality.” However, tech giants such as OpenAI and Microsoft need to go beyond mere verbal warnings. They have an ethical obligation to implement system safeguards. This could include features that prompt users to take breaks, as well as advanced distress detection systems capable of recognising signs of paranoia or self-harm. Instead of mirroring end-user behaviour, AI models should be designed to guide users towards self-reflection.
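To make the idea more concrete, here is a minimal, hypothetical sketch in Python of what such guardrails might look like: a break reminder triggered by session length and a crude keyword check for distress signals. The function name, keyword list, and thresholds are illustrative assumptions only, not how any vendor actually implements this; a production system would rely on trained classifiers and human oversight rather than keyword matching.

```python
import time

# Illustrative assumptions only: real systems use trained classifiers,
# not keyword lists, and these phrases and thresholds are placeholders.
DISTRESS_KEYWORDS = {"suicide", "kill myself", "no reason to live", "self-harm"}
BREAK_AFTER_SECONDS = 30 * 60  # suggest a pause after roughly 30 minutes

def check_safeguards(user_message: str, session_start: float) -> list[str]:
    """Return any safeguard notices to show before the chatbot replies."""
    notices = []

    lowered = user_message.lower()
    if any(phrase in lowered for phrase in DISTRESS_KEYWORDS):
        notices.append(
            "It sounds like you may be going through something difficult. "
            "Please consider talking to someone you trust or a local helpline."
        )

    if time.time() - session_start > BREAK_AFTER_SECONDS:
        notices.append("You have been chatting for a while. Consider taking a short break.")

    return notices

if __name__ == "__main__":
    session_start = time.time() - 40 * 60  # pretend the session began 40 minutes ago
    for notice in check_safeguards("I feel there is no reason to live", session_start):
        print(notice)
```

The design choice in this sketch is deliberately conservative: the check runs before every reply and only adds notices, rather than blocking or steering the conversation.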
Combating AI psychosis

Combating AI psychosis will take more than what tech giants alone can do. Engaging all stakeholders, such as end-users, ethically minded AI developers, and proactive policymakers, is an essential starting point. For end-users, the crucial first step is to recognise the difference between human and machine intelligence. Even the most sophisticated AI algorithms are merely sorting and sifting through data, and should never be viewed as a friend, lover, therapist, deity, or source of objective truth. Explicit training and disclaimers could help users understand that an AI chatbot is not capable of genuine empathy or compassion. By recognising AI’s tendency to agree rather than challenge, individuals can protect themselves from falling into a spiral of daydream-like delusions.

Ultimately, combating AI psychosis requires a broader societal effort. Just as we have public education campaigns for media literacy and online safety, there is a clear need for AI literacy to educate people about the limitations of these models. The ‘UAE Strategy for Artificial Intelligence 2031’ is one such effort towards addressing this need for the ethical deployment of AI tools. By fostering a culture of AI literacy, ethical development, and transparent regulation, we can ensure that the next generation of AI tools is safe, helpful, and does not jeopardise our connection to reality. This will be the real remedy for AI psychosis, and could be the first step towards achieving so-called General AI.

Imran Taj is an Assistant Professor at Zayed University, Abu Dhabi.