By Lance Eliot, Contributor
Copyright Forbes
A recent Stanford conference showcased the latest in precision mental health when combined with AI and LLMs.
In today’s column, I examine state-of-the-art advances in applying AI and LLMs to further attain precision mental health care. This is a rapidly evolving field. Clever innovations are happening at lightning-like speeds. I attended a recent conference at Stanford University that brought together a cadre of excellent, leading-edge AI and clinical researchers and speakers working at the forefront of where precision mental health is heading.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that involve mental health aspects. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI and large language models. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Precision Mental Health
You might be wondering what the field of precision mental health is all about.
In brief, precision mental health entails tailoring mental health care to the circumstances and characteristics of each individual who is undertaking mental health treatment (see my detailed definitional description at the link here). That depiction might seem like an obvious approach, in the sense that therapy should always be focused on the individual at hand, shouldn’t it? It should be, but that’s not necessarily what is taking place in the real world of mental health care.
Oftentimes, mental health professionals adopt a somewhat routinized approach that is based on averages and overall profiles, doing so via established research that has designated what is typical and expected. An individual seeking treatment is lumped into a statistical collective that suggests what they need, based on population-level studies. The person is viewed as akin to all others that fall into the same band or grouping. They almost become a number, as it were.
Fortunately, the advent of advanced AI has been opening the door to a more tailored approach to mental health care. By leaning into AI, researchers and therapists are customizing what will work best for each individual as they undergo diagnoses, treatment, and even preventative guidance.
In a research article entitled “Treatment Personalization and Precision Mental Health Care” by Danilo Moggia, Wolfgang Lutz, Eva-Lotta Brakemeier, and Leonard Bickman, Administration and Policy in Mental Health and Mental Health Services Research, August 2024, these salient points were made about the stirring rise of precision mental health (excerpts):
“The concept of personalizing mental health interventions to align with individual patients’ unique characteristics, needs, and circumstances has been a topic of longstanding interest in the field.”
“Clinical intuition probably represents the most commonly utilized resource for making these decisions.”
“With the advent of big data, machine learning, and artificial intelligence in recent years, personalization has moved towards the concept of precision mental health care.”
“Precision mental health care emphasizes using personalized data-driven strategies to optimize outcomes for a particular patient.”
Conference On Precision Mental Health
I recently attended an annual event at Stanford University on September 26, 2025, that explored the latest advances in the intertwining of precision mental health and advanced AI. The event was undertaken by the Stanford Center for Precision Mental Health and the Stanford School of Medicine, and constituted their 5th annual symposium on bridging psychiatry, neuroscience, and AI. For more info, see the link here.
The daylong event was jam-packed with fascinating and groundbreaking ideas and insights. Though I’d relish covering every presenter, space limitations dictate that I must selectively choose just a few to highlight. It’s a hard choice to make.
Kudos to all the researchers and presenters.
Circuit Biotypes At Scale
In a segment that was labeled as “Revolutionizing Precision Approaches to Cognitive and Mental Health: A Role for AI”, there was a fascinating presentation on “Circuit Biotypes at Scale: Making Precision Psychiatry Personal”. The presenter was Dr. Leanne Williams, Director, Stanford Center for Precision Mental Health, and Vincent VC Woo Professor of Psychiatry & Behavioral Sciences, Stanford University.
Allow me to convey in my own words and at a 30,000-foot level the nature of the research and the associated findings.
One of the most debilitating mental health issues is the rising prevalence of depression. People seem to be getting depressed on an increasingly alarming basis. You might be tempted to shrug off the potential harm of depression and suggest it is just a condition of life that we must all begrudgingly accept. The problem is that depression can have a much heavier and cascading impact than perhaps meets the eye at first glance.
The impact of depression shows up in poor work performance; it can undermine family life and pervade nearly all facets of a person’s existence. Depression can be fleeting, but it can also be naggingly persistent, constantly coming and going. Diagnosing whether someone is significantly depressed is often done on a talking basis rather than coupled with biological indicators or biomarkers.
Dr. Williams described research whereby functional MRIs capture brain circuitry pathways, and via AI analyses, this can serve as a biological indicator associated with depression. In a study entitled “Developing Clinically Interpretable Neuroimaging Biotypes in Psychiatry”, by Ahn, J., Foland-Ross, L., Akiki, T. J., Boyar, L., Wydler, I., Bostian, C., Zhang, X., Yang, H. J., Ellsay, A., Ma, E., Rajasekharan, D., Holtzheimer, P., Lim, K., Madore, M., Philip, N., Ajilore, O., Ma, J., and Williams, L. M., Biological Psychiatry, 2025, these key points were made (excerpts):
“Despite available treatments, major depressive disorder (MDD) remains one of the leading causes of disability across medical conditions.”
“Lacking biological guidance, clinicians rely on trial-and-error prescribing.”
“This critical review synthesizes studies showing how functional MRI (fMRI) can predict treatment outcomes and identify which treatment is most effective for an individual based on their brain circuit profile.”
“We illustrate one such method: a theoretically informed approach that quantifies dysfunction across six large-scale brain circuits, relative to healthy reference norms. The resulting personalized circuit scores serve as predictors of response or failure and as moderators of differential treatment outcomes.”
This is a prime example of precision mental health care and its progress as a result of infusing AI into the process. An individual can be brain-scanned to identify biomarkers associated with possible depression, and the talk therapy side can incorporate those findings into a holistic means of gauging the extent of depression that might be at play, along with determining suitable forms of treatment.
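To make the “relative to healthy reference norms” idea concrete, here is a minimal sketch of my own devising, not the study’s actual pipeline: each of a patient’s circuit measures is standardized against a healthy reference distribution, and circuits falling beyond a chosen cutoff are flagged as dysfunctional. The circuit names, reference values, and the two-standard-deviation threshold below are all hypothetical placeholders.

```python
# Hypothetical sketch: personalized circuit scores as z-scores against healthy norms.
# Circuit names, reference norms, and the threshold are illustrative only.

HEALTHY_NORMS = {           # circuit -> (reference mean, reference std dev)
    "default_mode":    (0.50, 0.10),
    "salience":        (0.40, 0.08),
    "attention":       (0.55, 0.12),
    "negative_affect": (0.30, 0.09),
    "positive_affect": (0.45, 0.11),
    "cognitive_ctrl":  (0.60, 0.10),
}

def circuit_scores(patient_measures, norms=HEALTHY_NORMS):
    """Standardize each circuit measure against its healthy reference norm."""
    return {c: (patient_measures[c] - mu) / sd for c, (mu, sd) in norms.items()}

def flag_dysfunction(scores, threshold=2.0):
    """Flag circuits whose |z| meets or exceeds the threshold (e.g., 2 SDs)."""
    return [c for c, z in scores.items() if abs(z) >= threshold]

patient = {"default_mode": 0.72, "salience": 0.41, "attention": 0.50,
           "negative_affect": 0.52, "positive_affect": 0.44, "cognitive_ctrl": 0.61}
scores = circuit_scores(patient)
print(flag_dysfunction(scores))  # circuits more than 2 SDs from the healthy mean
```

The per-circuit scores could then serve, as the study excerpt describes, as predictors or moderators of differential treatment outcomes.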
Per the concluding remarks of Dr. Williams: “AI will enable us to accelerate, refine biotypes, discover new ones, and identify the match between biotypes and many more treatments.”
The Digital Twins Are Here
In a segment labeled as “Shaping the Future of Precision Mental Health with Human-Centered AI”, the chair of the segment, Dr. Kilian M. Pohl, Professor, Psychiatry & Behavioral Sciences and, by courtesy, of Electrical Engineering, Stanford University, provided insightful context on the importance of human-centered AI for precision mental health care.
A presentation by Dr. Tina Hernandez-Boussard, Professor, Medicine (Biomedical Informatics), Biomedical Data Science, and Surgery, Stanford University, was entitled “Advancing AI Research, Education, Policy, and Practice to Service the Collective Needs of Humanity”. My particular interest in the topic that was presented relates to the use of AI as a digital twin of mental health clients or patients.
Readers might recall that I’ve been extensively analyzing the topic of AI-based digital twins for mental health care, see the link here and the link here, just to name a few. The notion of an AI-based digital twin is relatively straightforward, though implementation can be quite challenging.
Historically, digital twins have been used for assembly plants and on-the-floor factory machinery, such that a simulation is devised that reflects the nature and details of the designated machine. Via the simulation, you can readily examine numerous crucial elements about the machine. You might want to use the digital twin to figure out how long the actual machine is likely to work without any breakdowns. Once a machine is inside the factory, you can still use the digital twin to identify how to overcome machine failures that arise.
We can extend this same digital twin conception to human beings and their mental conditions. An AI-based simulation that is a medical-oriented digital twin can be set up based on each particular individual. This is not one size fits all. The beauty of using AI is that tailoring becomes a viable avenue. Without advanced AI, the cost would undoubtedly be prohibitive, plus the time required to devise and maintain the digital twin would be extremely protracted.
Yes, you can construct an AI-based simulated version of a client or patient that a psychiatrist or therapist could then use to gauge potential responses and reactions to a planned line of psychological analyses and therapeutics. A proposed treatment can be tried out on the digital twin, doing so safely, before administering it to the individual. The digital twin consists of simulations associated with both the body and the mind, thus encompassing the physiological and the psychological characteristics.
I consider the emergence of AI-based digital twins for mental health care to be an enormously promising field of research and practice. Stay tuned as I continue my ongoing coverage of it.
Multi-Modal Ambient AI
In a presentation entitled “Precision Mental Health Meets Translational AI: Neural Mechanisms and Individualized Digital Biomarker Trajectories,” a clever, all-encompassing surround-sound style approach to precision mental health was discussed.
The presenter was Dr. Ehsan Adeli, Assistant Professor of Psychiatry & Behavioral Sciences and, by courtesy, of Biomedical Data Science and of Computer Science, Stanford University. I’ve previously covered his superb work on ambient intelligence and its use in psychiatry and mental health, see the link here. His latest research extends this focus and provides a vividly illustrative scenario.
Imagine that a prospective client or patient has come to you on a pre-visit basis. The person seeking mental health care claims they are suffering from depression and anxiety. You interact with the individual. Perhaps you have them fill in various questionnaires that are standardized for assessing depression, anxiety, and the like (e.g., PHQ-9, GAD-7).
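For readers unfamiliar with how such questionnaires are tallied, here is a minimal scoring sketch. The PHQ-9 sums nine items, each rated 0 to 3, into a 0-to-27 total, with conventional severity cutoffs at 5, 10, 15, and 20. The code is my own illustration of that standard scoring convention, not any particular clinic’s intake software.

```python
# Scoring sketch for the PHQ-9 depression questionnaire: nine items,
# each rated 0-3, summed to a 0-27 total with conventional severity bands.

PHQ9_BANDS = [(20, "severe"), (15, "moderately severe"),
              (10, "moderate"), (5, "mild"), (0, "minimal")]

def score_phq9(item_ratings):
    """Sum the nine 0-3 item ratings and map the total to a severity band."""
    if len(item_ratings) != 9 or any(r not in (0, 1, 2, 3) for r in item_ratings):
        raise ValueError("PHQ-9 expects nine item ratings, each 0-3")
    total = sum(item_ratings)
    severity = next(label for cutoff, label in PHQ9_BANDS if total >= cutoff)
    return total, severity

print(score_phq9([2, 1, 2, 1, 1, 0, 2, 1, 0]))  # -> (10, 'moderate')
```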
Do you think that you can make a reasoned decision about the state of the individual and whether and to what degree they ought to be referred to a clinician for a full diagnosis and potential therapeutic treatment?
Well, you could try. The difficulty is that, so far, you have only somewhat sparse data on which to base your decision. There are the questionnaire answers. There is the face-to-face interaction that you had with the person. Not much more seems to be at play.
Aha, there is indeed a lot more data that could be included in the status-determining process. Suppose we captured the video and audio of the person during this pre-visit. Using AI, we could analyze how the person interacted, the tone of their voice, the manner and nature of the words spoken, even the gait of their walk and movement while undergoing the pre-visit.
If the person happens to be wearing a smartwatch or carrying a smartphone, there is a chance that additional data about their biological status would be available. This data could be analyzed by AI. All told, the AI could be devised to work on a multi-modal basis, bringing together a coherent review of the full gamut of all the data at hand.
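One simple way to picture how such disparate signals could be brought together is late fusion: each modality’s model produces its own normalized estimate, and a weighted combination yields an overall picture. The following is a toy sketch of that idea only; the modalities, weights, and scores are entirely hypothetical and do not reflect the presented research.

```python
# Toy illustration of late fusion across modalities: each modality model
# yields a normalized 0-1 risk estimate, and a weighted average combines them.
# Modalities, weights, and scores here are entirely hypothetical.

def fuse_modalities(modality_scores, weights):
    """Weighted average of per-modality scores, skipping missing modalities."""
    present = {m: s for m, s in modality_scores.items() if s is not None}
    if not present:
        raise ValueError("no modality produced a score")
    total_weight = sum(weights[m] for m in present)
    return sum(weights[m] * s for m, s in present.items()) / total_weight

weights = {"questionnaire": 0.4, "speech": 0.25, "video_gait": 0.2, "wearable": 0.15}
scores = {"questionnaire": 0.6, "speech": 0.5,
          "video_gait": None, "wearable": 0.4}  # gait data unavailable this visit
print(round(fuse_modalities(scores, weights), 3))
```

A practical appeal of this design is graceful degradation: when one data source (say, gait video) is unavailable, the remaining modalities are renormalized rather than the whole assessment failing.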
How far can this be extended?
Our homes are going to gradually be outfitted as smart homes. There are ambient and contactless sensors, such as air quality detection devices, LIDAR, cameras, smart lighting, smart thermostats, etc. Those are potential data sources that encompass the person and people living in and interacting within the domicile.
People are increasingly wearing specialized wearable sensors, including smartphones, smartwatches, smart pins, smart jewelry, and so on. Those are also potential data sources for gauging a person’s activity and body/mind status.
I find this research avenue to be very significant and timely. To date, generative AI and LLMs tend to be mono-modal instead of multi-modal. You interact with a text-based generative AI, but it doesn’t readily integrate video, audio, and other data modalities. Inch by inch, this is changing, and we are entering into an AI multi-modal world.
Leveraging multi-modal AI for precision mental health is a smart move and reflects the soon-to-be world in which we will be living.
AI Is In The Room These Days
In a presentation entitled “AI for Detection and Support of Mental Health Needs,” the presenter, Dr. Jonathan Chen, Assistant Professor, Medicine (Biomedical Informatics), and Biomedical Data Science, Stanford University, undertook a lively discussion on whether AI has opened Pandora’s box regarding mental health analyses or instead that AI might be a fountain of creativity and useful insights.
I’ll momentarily share with you his mesmerizing personal story that he told.
First, loyal readers are well aware that I have extensively covered the ins and outs, the upsides and downsides, and the latest trends in AI for mental health. Comparisons between human therapists and AI-based therapy often narrowly assume that a binary-choice dichotomy must exist, namely that someone either sees a human therapist or they confer with AI instead. I’ve repeatedly emphasized that therapy is being dramatically disrupted, transforming the classic dyad of therapist-patient to become a triad of therapist-AI-patient, see my coverage at the link here.
AI has already entered the therapy room.
Clients come to see their therapist and are armed with lengthy chats via AI that they want the therapist to review and incorporate into the therapy being undertaken (see my discussion at the link here). New laws such as the recently enacted Illinois state law, Nevada law, and Utah law, are putting restrictions on not only consumer use of AI for mental health, but are taking further steps by limiting how even therapists can use AI in their therapy endeavors (see my analysis of these new laws, the Illinois law at the link here, Nevada law at the link here, and the Utah law at the link here).
Not everyone agrees that this kind of ban associated with AI for mental health is a suitable direction for society; we might be (per the famed adage) cutting off our nose to spite our face.
Personal Story With Lessons Afoot
Returning to the presentation by Dr. Chen, he gave a personal story that touched the hearts and minds of the attendees. The story involves his work as a doctor and the circumstances of a wife who told him her husband of many years has been choking on his own food. She explained that her husband now has pneumonia. His dementia is getting worse, too. Other doctors had advised her to have her husband fed by a permanent feeding tube. The option was nearly unbearable to consider, but what else can she do?
Ponder what you might say to this exceedingly distressed and caring wife.
Dr. Chen offered his advice to the woman. Later that night, he decided to see what ChatGPT might say. The matter was weighing heavily on his mind, and he figured that his answer had been the best feasible in the moment of responding to the woman. Certainly, he assumed, generative AI would have done worse.
He wrote about this encounter in an article entitled “Who’s Training Whom” published in Stanford Medicine Magazine, November 2023, and made these key points (excerpts):
“Thinking up more challenging scenarios, I came up with the opening prompt of this essay based on my recent experience counseling a woman on the plan to place a feeding tube in her husband, who had advancing dementia.”
“This was a particularly challenging (but unfortunately common) scenario, with strong emotions and competing goals between avoiding harm from medical interventions unlikely to help versus an instinctive human need to feel that further treatments should always be continued.”
“I wondered what the chatbot would have come up with.”
“I then pretended to be the patient’s wife, posing my dilemma with the opening question above.”
“Around this point, as I continued to push the dialogue, I was unsettled to realize, ‘You know what? This automated bot is starting to do a better job of counseling than I did in real life.’”
I bring up this poignant observation because many seem to assume that AI for mental health is inherently inferior to any form of human-provided therapy. Nope. You see, human therapists are humans. They can inadvertently miss the mark. Assumptions that human therapists are the stuff of perfection are not validated in real life.
To clarify, I am not saying that AI is perfect either. Nope. You almost certainly know that AI can encounter so-called AI hallucinations and participate in the co-creation of delusions when interacting with users (see my coverage at the link here).
That’s also why OpenAI has opted to start forming its own network of online human therapists. The idea is simple. When AI cannot handle a mental health situation, route the user to a human therapist. Therapists become a suitable backstop. I’ve predicted that this is going to create a huge demand for therapists, since AI is going to be making referrals in the millions and possibly billions of potential instances (see my analysis at the link here).
Precision Mental Health Is Underway
Precision mental health is still in its early stages. As AI advances, you can bet your bottom dollar that precision mental health will readily expand and mature. We are in an exciting moment in time.
If you are a psychologist, psychiatrist, therapist, mental health professional, or researcher, it would be wise to pay due attention to what is happening in the realm of precision mental health as combined with AI. Don’t fall behind. Do not falsely believe this is some fad. It is real and has staying power.
As Eleanor Roosevelt notably remarked: “The future belongs to those who believe in the beauty of their dreams.” In this case, the dream is to vastly improve mental health care, and the miraculous catalyst will be AI.