
AI is changing how we learn medicine, not always in a good way

“Review these patients’ charts. Perform a brief history at the bedside. Prepare an oral presentation for rounds at 9 a.m.”
The instructions from my supervising doctor are clear. My challenge is time.
As a third-year medical student on my internal medicine rotation at Cooper University Hospital in Camden, my mornings are time-pressed. I usually have one critical hour after receiving patient assignments to understand each patient’s case well enough to present a treatment plan during hospital rounds with my supervising physician and peers. Each day tests my ability to multitask under pressure.
With multiple patients to see, labs to review, and plans to formulate, efficiency is crucial, so I find myself turning to a powerful tool: ChatGPT.
Generative AI platforms such as ChatGPT have rapidly become widely used in healthcare and beyond. Searching through medical resources for clinical information can be time consuming. Now, with a few simple keywords, I can generate organized fact sheets about diseases, complete with recommended physical exam maneuvers, diagnostic tests to consider, and key questions I should be prepared to answer during rounds.
What would have taken me a half hour to prepare now takes minutes, giving me extra time to talk with patients face-to-face at the bedside. It seems like a win-win, but there’s a catch.
While AI tools like ChatGPT have rapidly become sophisticated enough that studies show some versions could perform at a passing level on the U.S. Medical Licensing Exam, they remain imperfect. They can generate false information and draw from dubious sources. I remember asking an earlier version for citations to support its claims, only to discover that the references were completely fabricated.
My greatest concern as a medical student is that relying on these tools in the clinic may come at the expense of my own learning. Efficiency and time management are essential in the hospital, but leaning too heavily on AI risks shortcutting the hard work needed to truly master the art and science of medicine.
For example, during morning rounds the attending physician often fires off questions to test students’ knowledge of diseases and treatments. My ability to answer can directly affect my evaluations and, ultimately, my grades.
When the questioning begins, the pressure is on. One morning before rounds, I asked ChatGPT to generate a list of likely questions on the conditions I was preparing to present. I reviewed them quickly, and to my surprise, it anticipated, almost word for word, the questions my attending physician posed that day.
I answered them correctly. Yet as I walked away, I couldn’t help but wonder: Was that success the result of my preparation, or the predictive power of AI?
Despite its risks, AI is here to stay. Harvard Medical School, for example, now offers an introductory course on AI in healthcare for all students in its health sciences and technology track.
There’s little doubt that AI will play a major role in the future of medicine, and its influence is already shaping how students think about their careers.
Procedural specialties like surgery seem less threatened by automation and hold clear appeal. By contrast, diagnostic fields such as radiology — heavily reliant on image interpretation that AI performs increasingly well — are viewed with growing skepticism. One recent study found that AI has a significantly negative effect on students’ interest in radiology. I hear that sentiment in student lounge conversations, too.
My experience with AI tools during my internal medicine rotation, and medical school more broadly, has left me torn about their role in my education.
They have helped me prepare for rounds, review the steps of surgical procedures before entering the OR, and even generate diagrams and mnemonics for board exam prep.
Yet the same tools can encourage shortcuts, undermine aspects of the natural learning process, and too easily become a crutch.
AI has immense potential to help me and my classmates become better doctors, but only if we remember that in the end, it’s not ChatGPT that stands before the patient. It’s us.