
The Danger of Imperfect AI: Incomplete Results Can Steer Cancer Patients in the Wrong Direction


Cancer patients cannot wait for us to perfect chatbots or AI systems. They need reliable solutions now—and not all chatbots, at least so far, are up to the task.
I often think of the dedicated and overworked oncologists I have interviewed who find themselves drowning in an ever-expanding sea of data: genomics, imaging, treatment trials, side-effect profiles, and patient comorbidities. No human can process all of that unaided. Many physicians, in an understandable and even laudable effort to stay afloat, are turning to AI chatbots, decision-support models, and clinical-data assistants to help make sense of it all. But in oncology, the stakes are too high for blind faith in black boxes.
AI tools offer incredible promise for the future, and AI-augmented decision systems can improve accuracy. One integrated AI agent raised decision accuracy to 87.2% from a GPT-4 baseline of 30.3%. Clinical decision AI systems in oncology already assist in treatment selection, prognosis estimation, and synthesizing patient data. In England, for example, an AI tool called “C the Signs” helped boost cancer detection rates in GP practices from 58.7% to 66.0%. These are encouraging steps.
Anything below 100 percent is not enough when life is at stake. Cancer patients cannot afford to wait for us to resolve the issues these technologies still have. We risk something far worse than delay; we risk bad decisions born from incomplete, outdated, or altogether fabricated information.
One of the worst issues is “AI hallucination”: cases where the AI presents false information, invented studies, nonexistent anatomical structures, and incorrect treatment protocols. In one shocking example, Google’s health AI diagnosed damage to the “basilar ganglia,” an anatomical structure that does not exist. The confidently presented output looked authoritative until physicians recognized the error.
Recent testing of six leading models, including OpenAI’s models and Google’s Gemini, revealed just how unreliable these systems can be in medicine. They produced confident, step-by-step explanations that looked persuasive but were riddled with errors, ranging from incomplete logic to entirely fabricated conclusions. In oncology, where every patient is an outlier, that margin of error is unacceptable. Even specialized medical chatbots, which may sound authoritative, still present opaque and untraceable reasoning; their sources are inconsistent, and their statistics often meaningless. This is decision distortion.
The legal and ethical implications are real. If a treatment based on AI guidance causes harm, who is liable? The physician? The hospital? The AI developer? Medical-legal frameworks are scrambling to catch up, with some warning that overreliance on AI without human oversight could itself constitute negligence.
The problem of AI hallucination extends beyond the medical realm. In the legal world, AI hallucinations have already led to serious consequences: in at least seven recent cases, courts disciplined lawyers for citing fake case law generated by AI. In one high-profile case, Morgan & Morgan attorneys were sanctioned after submitting motions containing bogus citations. If courts are demanding accountability for AI mistakes in law, how long before the medical malpractice lawsuits start being filed?
In oncology, especially, reliance on AI amplifies risk because of how the tools are trained. Many large language models and decision systems depend on fixed publication sets or curated datasets, and new oncology breakthroughs may remain outside that training collection for months or years. When we query such a system, it may omit the newest trial, ignore emerging biomarkers, or default to an outmoded standard of care. When AI invents studies or hallucinates efficacy, and doctors rely on it, patients pay the price.
Moreover, cutting-edge medical data is often fragmented, heterogeneous, and non-standardized; imaging formats differ, electronic health record notes are not uniform, and rare biomarkers may exist only in supplementary data. AI does best with well-structured, consistent data; it struggles with the disorder at the frontier of research. That means novel or borderline cases may be precisely where AI is least reliable.
I’m not arguing that we scrap AI in cancer care. On the contrary, we must keep developing these tools, pushing boundaries, harnessing the power of computation to spot patterns no human sees. But we must not hand over ultimate decision-making authority to them, at least not yet.
Cancer patients deserve better than experiments. They deserve human physicians who remain in the loop, who audit, challenge, and interrogate AI outputs. We need an architecture of human and AI collaboration. When a chatbot suggests a regimen, the oncologist should review supporting evidence, check for newly published trials, and confirm that the model’s assumptions match the patient’s specifics. The physician must own the decision.
We can establish effective guardrails: regular validation of AI systems against updated clinical data, transparency about training sources, mandatory human review of every AI-suggested decision, and clear liability rules that ensure accountability while fostering responsible innovation. In practice, that means clinics deploying AI decision tools should monitor AI output, compare outcomes, run audits, and allow physicians to override or correct AI suggestions.
We must also push for standardization of data, sharing across institutions, open and timely inclusion of new studies, and rigorous mechanisms to flag contradictions or hallucinations. Without that, the models will always lag the frontier.
Cancer patients cannot wait for us to achieve AI perfection. But they deserve the best possible care now, and that requires that we never surrender human responsibility in the name of speed. AI must serve as an assistant, not a dictator. Humans remain in charge of deliberation and decision-making, and they must always err on the side of caution when faced with unverified or ambiguous algorithmic output.
AI chatbots are tools, not authorities. When we start letting algorithms decide instead of doctors, we have crossed from medicine into potential malpractice. Cancer patients don’t need perfect chatbots. They don’t have the time for the technology to catch up, and they cannot afford doctors who make decisions based on incomplete or outdated information. For patients and their families, the stakes are too high, and they deserve a much higher standard of care.
About the Author:
Anna Forsythe, PharmD, MBA, is the Founder and President of Oncoscope-AI, a pioneering platform transforming access to real-time oncology insights. With advanced degrees in pharmacy, health economics, and business, she has led global roles in pharma, co-founded Purple Squirrel Economics, and published extensively in clinical research and health economics.