
Every clinician knows the drill: ask about substance use, sleep, diet, exercise, and housing. The social history is the part of the patient interview that looks beyond the chart. It examines a patient’s lifestyle, relationships, and environment, and how these factors influence their well-being. But the flood of media stories over the last few months about chatbots powered by artificial intelligence has exposed a gap in medical care: Doctors know very little about their patients’ use of technology — particularly chatbots. Would they know what to do if a patient said they were in love with their fictional AI chatbot? Or if a patient felt unable to do anything without first checking with Gemini? Or that ChatGPT had convinced them to stop taking a prescribed medication?

The fundamental value of AI chatbots is that they can ingest user prompts and respond with articulate, well-reasoned answers. Some currently on the market, like ChatGPT and Claude, specialize in general-purpose responses. Others, like Character.AI and Replika, can mimic historical and fictional characters to provide companionship to users. Some companies have even advertised their chatbots as unique solutions to loneliness and the shortage of mental health providers. Evidence suggests that they can be effective: A study by our colleagues at the Stanford School of Education surveyed college students using Replika and found that 30 of the 1,000 participants voluntarily reported that their Replika bot had stopped them from attempting suicide.

But chatbots can become a problem when people use them to replace real human interaction outright. Without proper controls, this can quickly turn into unhealthy overdependence. A recent study conducted by the Kinsey Institute found that 16% of American singles have used AI as a romantic partner. Young adults are especially vulnerable; the same study found that 33% of Gen Z Americans have engaged with AI romantically.

If left unchecked, the implications of this type of AI usage can be tragic. Take the case of 14-year-old Sewell Setzer III, who died by suicide after Character.AI, a platform he regularly used to offset loneliness, was unable to detect and properly address his suicidal ideation. It’s clear that, as transformative as the technology is, chatbots cannot consistently detect or manage mental health crises, and small errors can carry devastating consequences.

This is why clinicians like us need to begin understanding our patients’ relationships with AI-powered platforms. At Brainstorm: The Stanford Lab for Mental Health Innovation, we have been working to create the first clinician guide to talking to patients about AI use. We believe that AI should be approached judiciously. We must acknowledge the benefits it offers, both to clinicians and to our patients, while remaining vigilant about its limitations. Importantly, we must be prepared to address its potential role in addictive and anxiety-inducing behaviors as part of comprehensive patient care.

Even when clinicians identify that a patient may have an unhealthy reliance on AI, treatment is not always straightforward. Akanksha Dadlani, a child and adolescent psychiatry fellow at Stanford University, shared her experience on a medical stabilization unit for eating disorders: “I had patients using ChatGPT to ask about the lowest calories needed to survive, how many calories were in certain foods, or even how to do hidden exercises. If their phones were taken away, they would still try to sneak in queries on their parents’ devices.” Because of this, she has had to completely change her approach to treatment — in some extreme situations, even suggesting restricting phone access outright. She admits this was uncharted territory, and she continues to navigate how to trust patients’ self-reported symptoms when they are influenced by chatbots. Her experience is not an isolated one; it signals a broader shift that all doctors must begin proactively making with patients.

Questions about AI use can easily be integrated into the conversations we are already having with our patients. The most natural entry points are:

- Initial patient intake or wellness visits across all specialties, from emergency medicine to oncology to surgery, as patients increasingly turn to AI for answers
- Mental health screenings (such as the PHQ-9 or GAD-7), especially if a patient reports anxiety, insomnia, or social withdrawal
- Discussions of coping strategies, sleep hygiene, or emotional support systems, where AI use may surface naturally

It is especially worth asking about AI use if a patient mentions leaning on digital tools and chatbots or “asking the internet” for advice.

Brandon Hage, a double board-certified adult, child, and adolescent psychiatrist and medical educator, shared an example from his own practice: “I had a patient with ADHD who struggled to keep up in class. They utilized ChatGPT to summarize lesson plans, which helped them organize material efficiently and be more successful in their classes.” That might sound promising, but he noted that for some people, using AI can actually impair functioning. For example, students with significant anxiety or obsessive-compulsive tendencies may feel the need to constantly fact-check AI outputs, leading to conflicting information and decreased efficiency. In the case of the ADHD patient, he did not discourage usage because their grades appeared to benefit from their study routine, though he acknowledged that he had not further inquired about their engagement with the AI tool.

This is where the clinician’s own toolkit can shape future encounters, providing real-time prompts that deepen conversations and possibly even influence the course of treatment. Once a doctor has identified the right moment — ideally during the social history — asking about AI use doesn’t need to be complicated. The goal is to understand how, when, and why patients are turning to AI and how it may influence their health, relationships, and beliefs. Here’s a streamlined framework with sample questions to get started:

- General use: In what ways do you currently use chatbots? In which domains of your life do you use AI: seeking medical information, as a therapy substitute, as a companion, or something else?
- Medical information: What medical topics are you discussing with AI, and what information is it giving you? What changes have you made to your health or routines as a result?
- Use and dependence: How would you feel if AI were no longer available — for example, would you find yourself postponing tasks or feeling stressed? How much time do you spend using AI each day?
- Chatbot as companion: In what ways has talking to AI affected how you feel mentally, emotionally, and physically?
- Comparison to other supports: What do you feel AI provides for you that medication, therapy, or other forms of support might not?
After discussing these prompts with the patient, it’s valuable to step back and consider the bigger picture of their relationship with AI. Does the tool seem to lift their mood and provide beneficial support, or does it contribute to feelings of self-doubt, anxiety, or social isolation? Are there early signs of over-reliance, or patterns surfacing that resemble behavioral addiction? By deliberately making space for these reflections, we can begin treating AI use as a relevant component of the patient’s mental health history and determining whether it should inform the diagnosis or guide treatment adjustments.

The balance of benefits and harms is still evolving, but one thing is certain: Generative AI is already shaping health in profound ways. As clinicians, we cannot afford to fly blind. We must confront this new reality and create diagnostic and preventive tools to better care for our patients. Most importantly, this is not a one-time conversation. Patients should know that if they ever feel concerned about their own or a family member’s AI use, they can return to us as clinicians and lean on the people they trust most.

Saneha Borisuth is a global medicine scholar and medical student at the University of Illinois at Chicago and a research fellow at Brainstorm: The Stanford Lab for Mental Health Innovation. Nina Vasan, M.D., M.B.A., is a clinical assistant professor of psychiatry at Stanford University School of Medicine, where she is the founder and director of Brainstorm: The Stanford Lab for Mental Health Innovation. She is the chief medical officer at Silicon Valley Executive Psychiatry, a concierge private practice for elite professionals and their families.