Anatomical Pathology, Department of Pathology and Laboratory Medicine, Te Toka Tumai Auckland, Te Whatu Ora (Health New Zealand), Auckland, New Zealand.
Department of Pathology, Wexner Medical Center, The Ohio State University, Columbus, OH, USA.
Diagn Pathol. 2024 Feb 27;19(1):43. doi: 10.1186/s13000-024-01464-7.
The integration of large language models (LLMs) such as ChatGPT into diagnostic medicine, particularly digital pathology, has garnered significant attention. However, understanding the challenges and barriers associated with the use of LLMs in this context is crucial for their successful implementation.
A scoping review was conducted to explore the challenges and barriers of using LLMs in diagnostic medicine, with a focus on digital pathology. A comprehensive search of electronic databases, including PubMed and Google Scholar, was performed for relevant articles published within the past four years. The selected articles were critically analyzed to identify and summarize the challenges and barriers reported in the literature.
The scoping review identified several challenges and barriers associated with the use of LLMs in diagnostic medicine. These included limitations in contextual understanding and interpretability, biases in training data, ethical considerations, impact on healthcare professionals, and regulatory concerns. Challenges in contextual understanding and interpretability arise because these models lack a true understanding of medical concepts, are not explicitly trained on medical records selected by trained professionals, and operate as black boxes. Biases in training data pose a risk of perpetuating disparities and inaccuracies in diagnoses. Ethical considerations include patient privacy, data security, and responsible AI use. The integration of LLMs may affect healthcare professionals' autonomy and decision-making. Regulatory concerns center on the need for guidelines and frameworks to ensure safe and ethical implementation.
The scoping review highlights the challenges and barriers of using LLMs in diagnostic medicine, with a focus on digital pathology. Understanding these challenges is essential for addressing the limitations and developing strategies to overcome barriers. It is critical that healthcare professionals be involved in the selection of data and the fine-tuning of the models. Further research, validation, and collaboration among AI developers, healthcare professionals, and regulatory bodies are necessary to ensure the responsible and effective integration of LLMs in diagnostic medicine.