Artsi Yaara, Sorin Vera, Glicksberg Benjamin S, Korfiatis Panagiotis, Freeman Robert, Nadkarni Girish N, Klang Eyal
Azrieli Faculty of Medicine, Bar-Ilan University, Zefat 1311502, Israel.
Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA.
J Clin Med. 2025 Sep 1;14(17):6169. doi: 10.3390/jcm14176169.
Large language models (LLMs) have the potential to transform healthcare by assisting in documentation, diagnosis, patient communication, and medical education. However, their integration into clinical practice remains a challenge. This perspective explores the barriers to implementation by synthesizing recent evidence across five challenge domains: workflow misalignment and diagnostic safety, bias and equity, regulatory and legal governance, technical vulnerabilities such as hallucinations or data poisoning, and the preservation of patient trust and human connection. While the perspective focuses on barriers, LLM capabilities and mitigation strategies are advancing rapidly, raising the likelihood of near-term clinical impact. Drawing on recent empirical studies, we propose a framework for understanding the key technical, ethical, and practical challenges associated with deploying LLMs in clinical environments and provide directions for future research, governance, and responsible deployment.