A few weeks ago, a medical researcher reported a rather curious case of AI-induced medical harm: an individual who consulted ChatGPT went into a psychotic spiral due to bromide poisoning. Experts have been raising such concerns for a while now, and it seems OpenAI aims to counter them with a dedicated ChatGPT mode for medical advice.
The tall hope
Tibor Blaho, an engineer at AI-focused firm AIPRM, shared an interesting snippet spotted in the code of ChatGPT’s web app. The strings mention a new feature called “Clinician Mode.” The code doesn’t go into detail about how it works, but it appears to be a dedicated mode for seeking health-related advice, much like the safety guardrails OpenAI has implemented for teen accounts.
Notably, OpenAI hasn’t made any official announcement about such a feature, so take this news with the proverbial pinch of salt. But a few theories are floating around about how it could work. Justin Angel, a developer who is a familiar name in the Apple and Windows communities, suggested on X that it could be a protected mode that limits the information source to medical research papers.
For example, if you seek medical advice about wellness issues or symptoms, ChatGPT would respond based only on information extracted from trusted medical sources. That way, there is less chance of ChatGPT doling out misleading health advice.
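If the mode does work that way, the underlying idea is simple to sketch. The Python snippet below is purely illustrative and has no connection to OpenAI’s actual code: the domain whitelist, the Passage type, and the search_corpus stub are hypothetical stand-ins for a real retrieval backend.

```python
from dataclasses import dataclass

# Hypothetical whitelist of vetted medical domains; entries are examples only.
TRUSTED_DOMAINS = {
    "pubmed.ncbi.nlm.nih.gov",
    "www.cochranelibrary.com",
    "www.who.int",
}

@dataclass
class Passage:
    text: str
    domain: str  # where the passage was retrieved from

def search_corpus(query: str) -> list[Passage]:
    # Stand-in for a real search backend; returns canned results here.
    return [
        Passage("Reducing sodium intake lowers blood pressure.", "pubmed.ncbi.nlm.nih.gov"),
        Passage("Try this one weird trick to cut salt!", "random-wellness-blog.example"),
    ]

def clinician_mode_answer(query: str) -> str:
    # Keep only evidence retrieved from vetted medical sources.
    evidence = [p for p in search_corpus(query) if p.domain in TRUSTED_DOMAINS]
    if not evidence:
        return "No trusted medical evidence found; please consult a clinician."
    # A real system would pass the retained evidence to the model as
    # grounding context; here we simply quote it.
    return "Based on trusted sources: " + " ".join(p.text for p in evidence)

print(clinician_mode_answer("How can I reduce my salt intake?"))
```

The notable design choice in such a setup is that filtering happens before generation, so the model never sees unvetted material in the first place rather than being asked to ignore it after the fact.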
The blunt reality
The idea behind something like “Clinician Mode” is not too far-fetched. Just a day ago, Consensus launched a feature called “Medical Mode.” When a health-related query is submitted, the conversational AI looks for answers “exclusively on the highest-quality medical evidence,” a corpus that includes over eight million papers and thousands of vetted clinical guidelines. The approach sounds safe on the surface, but the risks persist.
A paper published last month in the journal Scientific Reports highlighted the pitfalls of using ChatGPT in a medical context. “Caution should be exercised, and education provided to colleagues and patients, regarding the risk of hallucination and incorporation of technical jargon which may make the results challenging to interpret,” the paper warned. Even so, the industry appears to be moving steadily toward AI.