Suppr 超能文献

Similar Articles

1. Health equity in the era of large language models.
Am J Manag Care. 2025 Mar;31(3):112-117. doi: 10.37765/ajmc.2025.89695.
2. Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals.
J Med Internet Res. 2024 Apr 25;26:e56764. doi: 10.2196/56764.
3. Empowering nurses to champion Health equity & BE FAIR: Bias elimination for fair and responsible AI in healthcare.
J Nurs Scholarsh. 2025 Jan;57(1):130-139. doi: 10.1111/jnu.13007. Epub 2024 Jul 29.
4. Implications of Large Language Models for Quality and Efficiency of Neurologic Care: Emerging Issues in Neurology.
Neurology. 2024 Jun 11;102(11):e209497. doi: 10.1212/WNL.0000000000209497. Epub 2024 May 17.
5. Utilizing large language models for gastroenterology research: a conceptual framework.
Therap Adv Gastroenterol. 2025 Apr 1;18:17562848251328577. doi: 10.1177/17562848251328577. eCollection 2025.
6. Leveraging large language models to foster equity in healthcare.
J Am Med Inform Assoc. 2024 Sep 1;31(9):2147-2150. doi: 10.1093/jamia/ocae055.
7. The Impact of Artificial Intelligence on Health Equity in Oncology: Scoping Review.
J Med Internet Res. 2022 Nov 1;24(11):e39748. doi: 10.2196/39748.
8. AI in Home Care-Evaluation of Large Language Models for Future Training of Informal Caregivers: Observational Comparative Case Study.
J Med Internet Res. 2025 Apr 28;27:e70703. doi: 10.2196/70703.
9. Advancing health equity: evaluating AI translations of kidney donor information for Spanish speakers.
Front Public Health. 2025 Jan 27;13:1484790. doi: 10.3389/fpubh.2025.1484790. eCollection 2025.
10. Chatting Beyond ChatGPT: Advancing Equity Through AI-Driven Language Interpretation.
J Gen Intern Med. 2024 Feb;39(3):492-495. doi: 10.1007/s11606-023-08497-6. Epub 2023 Oct 30.

References Cited in This Article

1. Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft.
NAM Perspect. 2024 Apr 8;2024. doi: 10.31478/202403a. eCollection 2024.
2. AI models collapse when trained on recursively generated data.
Nature. 2024 Jul;631(8022):755-759. doi: 10.1038/s41586-024-07566-y. Epub 2024 Jul 24.
3. Large Language Model-Based Responses to Patients' In-Basket Messages.
JAMA Netw Open. 2024 Jul 1;7(7):e2422399. doi: 10.1001/jamanetworkopen.2024.22399.
4. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs).
NPJ Digit Med. 2024 Jul 8;7(1):183. doi: 10.1038/s41746-024-01157-x.
5. Artificial Intelligence in the Provision of Health Care: An American College of Physicians Policy Position Paper.
Ann Intern Med. 2024 Jul;177(7):964-967. doi: 10.7326/M24-0146. Epub 2024 Jun 4.
6. Understanding and Mitigating Bias in Imaging Artificial Intelligence.
Radiographics. 2024 May;44(5):e230067. doi: 10.1148/rg.230067.
7. Proceedings From the 2022 ACR-RSNA Workshop on Safety, Effectiveness, Reliability, and Transparency in AI.
J Am Coll Radiol. 2024 Jul;21(7):1119-1129. doi: 10.1016/j.jacr.2024.01.024. Epub 2024 Feb 13.
8. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA.
J Am Coll Radiol. 2024 Aug;21(8):1292-1310. doi: 10.1016/j.jacr.2023.12.005. Epub 2024 Jan 23.
9. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study.
Lancet Digit Health. 2024 Jan;6(1):e12-e22. doi: 10.1016/S2589-7500(23)00225-X.
10. Considerations for addressing bias in artificial intelligence for health equity.
NPJ Digit Med. 2023 Sep 12;6(1):170. doi: 10.1038/s41746-023-00913-9.

Health equity in the era of large language models.

Authors

Tierney Aaron A, Reed Mary E, Grant Richard W, Doo Florence X, Payán Denise D, Liu Vincent X

Affiliation

Kaiser Permanente Northern California Division of Research, 4480 Hacienda Dr, Pleasanton, CA 94588.

Publication

Am J Manag Care. 2025 Mar;31(3):112-117. doi: 10.37765/ajmc.2025.89695.

DOI:10.37765/ajmc.2025.89695
PMID:40053403
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12085167/
Abstract

This commentary presents a summary of 8 major regulations and guidelines that have direct implications for the equitable design, implementation, and maintenance of health care-focused large language models (LLMs) deployed in the US. We grouped key equity issues for LLMs into 3 domains: (1) linguistic and cultural bias, (2) accessibility and trust, and (3) oversight and quality control. Solutions shared by these regulations and guidelines are to (1) ensure diverse representation in training data and in teams that develop artificial intelligence (AI) tools, (2) develop techniques to evaluate AI-enabled health care tool performance against real-world data, (3) ensure that AI used in health care is free of discrimination and integrates equity principles, (4) take meaningful steps to ensure access for patients with limited English proficiency, (5) apply AI tools to make workplaces more efficient and reduce administrative burdens, (6) require human oversight of AI tools used in health care delivery, and (7) ensure AI tools are safe, accessible, and beneficial while respecting privacy. There is an opportunity to prevent further embedding of existing disparities and issues in the health care system by enhancing health equity through thoughtfully designed and deployed LLMs.
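Two of the shared solutions above — evaluating AI-enabled tools against real-world data and ensuring they are free of discrimination — imply routinely auditing a deployed model's performance across patient subgroups. As a minimal illustrative sketch (not from the article; the subgroup labels and threshold are hypothetical), such an audit can be as simple as computing per-group accuracy and flagging the largest gap:

```python
def subgroup_accuracy(records):
    """Per-group accuracy from (group, prediction, label) tuples."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / n for g, n in totals.items()}

def max_disparity(acc_by_group):
    """Largest pairwise accuracy gap -- a simple equity red flag."""
    vals = list(acc_by_group.values())
    return max(vals) - min(vals)

# Toy evaluation set stratified by preferred language (hypothetical data).
records = [
    ("english", 1, 1), ("english", 0, 0), ("english", 1, 1), ("english", 1, 0),
    ("spanish", 1, 0), ("spanish", 0, 0), ("spanish", 0, 1), ("spanish", 1, 1),
]
acc = subgroup_accuracy(records)
print(acc)                 # {'english': 0.75, 'spanish': 0.5}
print(max_disparity(acc))  # 0.25
```

Real audits would use clinically meaningful metrics and confidence intervals, but the structure — stratify, score, compare — follows directly from the guidelines' call for real-world, nondiscriminatory evaluation.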
