Pal Avishek, Wangmo Tenzin, Bharadia Trishna, Ahmed-Richards Mithi, Bhanderi Mayank Bhailalbhai, Kachhadiya Rohitbhai, Allemann Samuel S, Elger Bernice Simone
Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.
Patient Author, The Spark Global, Buckinghamshire, UK.
Patient Prefer Adherence. 2025 Jul 31;19:2227-2249. doi: 10.2147/PPA.S527922. eCollection 2025.
Generative artificial intelligence (gAI) tools and large language models (LLMs) are gaining popularity among non-specialist audiences (patients, caregivers, and the general public) as a source of plain-language medical information. AI-based models have the potential to act as a convenient, customizable, and easy-to-access source of information that can improve patients' self-care and health literacy and enable greater engagement with clinicians. However, serious negative outcomes could occur if these tools fail to provide reliable, relevant, and understandable medical information. Herein, we review published findings on the opportunities and risks associated with such use of gAI/LLMs. We reviewed 44 articles published between January 2023 and July 2024. The included articles focused primarily on readability and accuracy; however, only three studies involved actual patients. Responses were reported to be reasonably accurate, sufficiently readable, and adequately detailed. The most commonly reported risks were oversimplification, over-generalization, lower accuracy in response to complex questions, and lack of transparency regarding information sources. There are ethical concerns that overreliance on, or unsupervised reliance on, gAI/LLMs could lead to the "humanizing" of these models and pose risks to patient health equity, inclusiveness, and data privacy. For these technologies to be truly transformative, they must become more transparent, be subject to appropriate governance and monitoring, and incorporate feedback from healthcare professionals (HCPs), patients, and other experts. Uptake will also require education and awareness among non-specialist audiences regarding the optimal use of these tools as sources of plain-language medical information.