Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics.

Author Information

Pal Avishek, Wangmo Tenzin, Bharadia Trishna, Ahmed-Richards Mithi, Bhanderi Mayank Bhailalbhai, Kachhadiya Rohitbhai, Allemann Samuel S, Elger Bernice Simone

Affiliations

Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.

Patient Author, The Spark Global, Buckinghamshire, UK.

Publication Information

Patient Prefer Adherence. 2025 Jul 31;19:2227-2249. doi: 10.2147/PPA.S527922. eCollection 2025.


DOI:10.2147/PPA.S527922
PMID:40771655
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12325106/
Abstract

Generative artificial intelligence (gAI) tools and large language models (LLMs) are gaining popularity among non-specialist audiences (patients, caregivers, and the general public) as a source of plain language medical information. AI-based models have the potential to act as a convenient, customizable and easy-to-access source of information that can improve patients' self-care and health literacy and enable greater engagement with clinicians. However, serious negative outcomes could occur if these tools fail to provide reliable, relevant and understandable medical information. Herein, we review published findings on opportunities and risks associated with such use of gAI/LLMs. We reviewed 44 articles published between January 2023 and July 2024. From the included articles, we find a focus on readability and accuracy; however, only three studies involved actual patients. Responses were reported to be reasonably accurate and sufficiently readable and detailed. The most commonly reported risks were oversimplification, over-generalization, lower accuracy in response to complex questions, and lack of transparency regarding information sources. There are ethical concerns that overreliance/unsupervised reliance on gAI/LLMs could lead to the "humanizing" of these models and pose a risk to patient health equity, inclusiveness and data privacy. For these technologies to be truly transformative, they must become more transparent, have appropriate governance and monitoring, and incorporate feedback from healthcare professionals (HCPs), patients, and other experts. Uptake of these technologies will also need education and awareness among non-specialist audiences around their optimal use as sources of plain language medical information.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e5d/12325106/af0b3dc6de89/PPA-19-2227-g0001.jpg

Similar Articles

[1]
Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics.

Patient Prefer Adherence. 2025-7-31

[2]
Sexual Harassment and Prevention Training

2025-1

[3]
Adapting Safety Plans for Autistic Adults with Involvement from the Autism Community.

Autism Adulthood. 2025-5-28

[4]
How lived experiences of illness trajectories, burdens of treatment, and social inequalities shape service user and caregiver participation in health and social care: a theory-informed qualitative evidence synthesis.

Health Soc Care Deliv Res. 2025-6

[5]
Patient buy-in to social prescribing through link workers as part of person-centred care: a realist evaluation.

Health Soc Care Deliv Res. 2024-9-25

[6]
Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology.

Int J Lang Commun Disord. 2025

[7]
"A System That Wasn't Really Optimized for Me": Factors Influencing Autistic University Students' Access to Information.

Autism Adulthood. 2025-4-3

[8]
Parents' and informal caregivers' views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence.

Cochrane Database Syst Rev. 2017-2-7

[9]
"In a State of Flow": A Qualitative Examination of Autistic Adults' Phenomenological Experiences of Task Immersion.

Autism Adulthood. 2024-9-16

[10]
Interventions to improve safe and effective medicines use by consumers: an overview of systematic reviews.

Cochrane Database Syst Rev. 2014-4-29
