

The imperative of diversity and equity for the adoption of responsible AI in healthcare.

Authors

Hilling Denise E, Ihaddouchen Imane, Buijsman Stefan, Townsend Reggie, Gommers Diederik, van Genderen Michel E

Affiliations

Department of Gastrointestinal Surgery and Surgical Oncology, Erasmus MC Cancer Institute, University Medical Center, Rotterdam, Netherlands.

Erasmus MC Datahub, University Medical Center, Rotterdam, Netherlands.

Publication

Front Artif Intell. 2025 Apr 16;8:1577529. doi: 10.3389/frai.2025.1577529. eCollection 2025.

DOI: 10.3389/frai.2025.1577529
PMID: 40309720
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12040885/
Abstract

Artificial Intelligence (AI) in healthcare holds transformative potential but faces critical challenges in ethical accountability and systemic inequities. Biases in AI models, such as lower diagnosis rates for Black women or gender stereotyping in Large Language Models, highlight the urgent need to address historical and structural inequalities in data and development processes. Disparities in clinical trials and datasets, often skewed toward high-income, English-speaking regions, amplify these issues. Moreover, the underrepresentation of marginalized groups among AI developers and researchers exacerbates these challenges. To ensure equitable AI, diverse data collection, federated data-sharing frameworks, and bias-correction techniques are essential. Structural initiatives, such as fairness audits, transparent AI model development processes, and early registration of clinical AI models, alongside inclusive global collaborations like TRAIN-Europe and CHAI, can drive responsible AI adoption. Prioritizing diversity in datasets and among developers and researchers, as well as implementing transparent governance will foster AI systems that uphold ethical principles and deliver equitable healthcare outcomes globally.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f7fb/12040885/80041b677c17/frai-08-1577529-g001.jpg

Similar articles

1. The imperative of diversity and equity for the adoption of responsible AI in healthcare. Front Artif Intell. 2025 Apr 16;8:1577529. doi: 10.3389/frai.2025.1577529. eCollection 2025.
2. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus. 2023 Aug 10;15(8):e43262. doi: 10.7759/cureus.43262. eCollection 2023 Aug.
3. Artificial intelligence to revolutionize IBD clinical trials: a comprehensive review. Therap Adv Gastroenterol. 2025 Feb 23;18:17562848251321915. doi: 10.1177/17562848251321915. eCollection 2025.
4. Data stewardship and curation practices in AI-based genomics and automated microscopy image analysis for high-throughput screening studies: promoting robust and ethical AI applications. Hum Genomics. 2025 Feb 23;19(1):16. doi: 10.1186/s40246-025-00716-x.
5. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol. 2024 Jan;42(1):3-15. doi: 10.1007/s11604-023-01474-3. Epub 2023 Aug 4.
6. Empowering nurses to champion Health equity & BE FAIR: Bias elimination for fair and responsible AI in healthcare. J Nurs Scholarsh. 2025 Jan;57(1):130-139. doi: 10.1111/jnu.13007. Epub 2024 Jul 29.
7. Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. PLOS Digit Health. 2025 Apr 8;4(4):e0000810. doi: 10.1371/journal.pdig.0000810. eCollection 2025 Apr.
8. AI for all: bridging data gaps in machine learning and health. Transl Behav Med. 2025 Jan 16;15(1). doi: 10.1093/tbm/ibae075.
9. Responsible artificial intelligence for addressing equity in oral healthcare. Front Oral Health. 2024 Jul 18;5:1408867. doi: 10.3389/froh.2024.1408867. eCollection 2024.
10. Artificial Intelligence to Promote Racial and Ethnic Cardiovascular Health Equity. Curr Cardiovasc Risk Rep. 2024 Nov;18(11):153-162. doi: 10.1007/s12170-024-00745-6. Epub 2024 Aug 20.

Cited by

1. Large language models in clinical nutrition: an overview of its applications, capabilities, limitations, and potential future prospects. Front Nutr. 2025 Aug 7;12:1635682. doi: 10.3389/fnut.2025.1635682. eCollection 2025.

References

1. Gender Representation of Health Care Professionals in Large Language Model-Generated Stories. JAMA Netw Open. 2024 Sep 3;7(9):e2434997. doi: 10.1001/jamanetworkopen.2024.34997.
2. The Need for Continuous Evaluation of Artificial Intelligence Prediction Algorithms. JAMA Netw Open. 2024 Sep 3;7(9):e2433009. doi: 10.1001/jamanetworkopen.2024.33009.
3. Federated data access and federated learning: improved data sharing, AI model development, and learning in intensive care. Intensive Care Med. 2024 Jun;50(6):974-977. doi: 10.1007/s00134-024-07408-5. Epub 2024 Apr 18.
4. A Nationwide Network of Health AI Assurance Laboratories. JAMA. 2024 Jan 16;331(3):245-249. doi: 10.1001/jama.2023.26930.
5. Tackling bias in AI health datasets through the STANDING Together initiative. Nat Med. 2022 Nov;28(11):2232-2233. doi: 10.1038/s41591-022-01987-w.
6. Comparison of Methods to Reduce Bias From Clinical Prediction Models of Postpartum Depression. JAMA Netw Open. 2021 Apr 1;4(4):e213909. doi: 10.1001/jamanetworkopen.2021.3909.