AI chatbots gain traction among US doctors for clinical and administrative work
Specialized medical artificial intelligence chatbots are increasingly being integrated into healthcare, assisting clinicians with administrative work, research, documentation and diagnostic support, while prompting concerns over privacy, regulatory compliance and the limits of machine-based reasoning.
Doctors are using AI systems to help manage the volume of medical research published each year. "You'd need like 18 hours a day to stay up to date" with the millions of research papers published annually, Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai, told CNN.
These tools are increasingly relied upon to summarize clinical literature and guide physicians through evolving guidelines. However, experts stress that general-purpose systems are not always suited for clinical use. "ChatGPT is like your crazy uncle," said Dr. Ida Sim, a professor at the University of California, San Francisco, highlighting concerns over reliability in non-specialized models.
Administrative tasks remain one of the most immediate areas of adoption. AI is being used to draft insurance correspondence for prior authorizations, reducing workloads that can otherwise take clinicians hours each week. In documentation, chatbots are used to generate summaries of patient encounters and hospital stays. "It's probably safer to have artificial intelligence review a hospital course and know everything happened, versus you as a human — with limited time, jumping between note to note — trying to put the pieces together," Dr. Dashevsky argued, referring to the potential for AI to reduce omissions in complex cases.
In clinical decision support, medical students and physicians use AI to generate potential diagnoses based on patient data such as lab results and imaging. Evan Patel, a medical student at Rush University Medical College, said, "AI chatbots sort of help orient me to what possibilities it could be" when building a differential diagnosis.
Despite these benefits, experts caution against overreliance on automated systems. "People treat AI like it's magic. It's not magic. It can't just do anything you want," said Dr. Jonathan H. Chen, an associate professor at Stanford Medicine. He also noted variability in outputs, adding, "You ask the same question 10 times, and it'll give you 10 different answers."
Privacy concerns remain a significant issue, particularly with the use of unauthorized systems sometimes referred to as "shadow AI." These tools may not comply with healthcare privacy requirements. "'HIPAA compliance' is not an accurate term to use by any company," said Iliana Peters, a healthcare lawyer and former HIPAA enforcement lead for the US Department of Health and Human Services, underscoring regulatory ambiguity in the sector.
Experts also warn of data risks associated with patient information being uploaded to unapproved platforms. "Data is money," said Dr. Carolyn Kaufman, a resident physician at Stanford Medicine, highlighting concerns that sensitive information could be commodified.
Beyond privacy, clinicians emphasize that AI lacks the contextual judgment required in real-world medicine. "If we just apply guidelines, then replace us," said Dr. Sim, pointing to the importance of clinical judgment beyond rule-based recommendations.
While AI is increasingly effective at managing medical "knowledge" and administrative "workflows," experts caution that it does not replicate human expertise in interpreting complex, evolving patient circumstances.
