
Participant flow diagrams for health equity in AI.

Affiliations

Harvard Medical School, Boston, MA, USA.

Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Faculty of Engineering, University of Porto, Porto, Portugal; Institute for Systems and Computer Engineering, Technology and Science (INESCTEC), Porto, Portugal.

Publication Information

J Biomed Inform. 2024 Apr;152:104631. doi: 10.1016/j.jbi.2024.104631. Epub 2024 Mar 27.

DOI: 10.1016/j.jbi.2024.104631
PMID: 38548006
Abstract

Selection bias can arise through many aspects of a study, including recruitment, inclusion/exclusion criteria, input-level exclusion and outcome-level exclusion, and often reflects the underrepresentation of populations historically disadvantaged in medical research. The effects of selection bias can be further amplified when non-representative samples are used in artificial intelligence (AI) and machine learning (ML) applications to construct clinical algorithms. Building on the "Data Cards" initiative for transparency in AI research, we advocate for the addition of a participant flow diagram for AI studies detailing relevant sociodemographic and/or clinical characteristics of excluded participants across study phases, with the goal of identifying potential algorithmic biases before their clinical implementation. We include both a model for this flow diagram as well as a brief case study explaining how it could be implemented in practice. Through standardized reporting of participant flow diagrams, we aim to better identify potential inequities embedded in AI applications, facilitating more reliable and equitable clinical algorithms.
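The reporting structure the abstract advocates can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's implementation: the record fields, phase names, and function are invented for illustration. It tallies, for each study phase, how many participants were excluded and their breakdown by a chosen sociodemographic attribute, which is the kind of information the proposed flow diagram would report.

```python
from collections import Counter

# Hypothetical participant records (toy data). The paper proposes reporting
# sociodemographic characteristics of participants excluded at each phase.
participants = [
    {"id": 1, "sex": "F", "excluded_at": None},             # retained in cohort
    {"id": 2, "sex": "M", "excluded_at": "recruitment"},
    {"id": 3, "sex": "F", "excluded_at": "input-level"},
    {"id": 4, "sex": "M", "excluded_at": "outcome-level"},
    {"id": 5, "sex": "F", "excluded_at": "input-level"},
]

def flow_summary(records, strata_key):
    """Count exclusions per study phase, stratified by one demographic key."""
    summary = {}
    for r in records:
        phase = r["excluded_at"]
        if phase is None:
            continue  # retained participants do not appear in exclusion counts
        summary.setdefault(phase, Counter())[r[strata_key]] += 1
    return summary

# Stratifying exclusions by sex reveals whether any phase disproportionately
# drops one group -- the signal a participant flow diagram is meant to surface.
print(flow_summary(participants, "sex"))
```

A real diagram would report such counts at every phase (recruitment, inclusion/exclusion criteria, input-level and outcome-level exclusion) and across several attributes, so that disproportionate attrition is visible before an algorithm built on the remaining cohort reaches clinical use.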

Similar Articles

1. Participant flow diagrams for health equity in AI.
J Biomed Inform. 2024 Apr;152:104631. doi: 10.1016/j.jbi.2024.104631. Epub 2024 Mar 27.
2. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
3. Recommendations to promote fairness and inclusion in biomedical AI research and clinical use.
J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
4. Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health.
Orthod Craniofac Res. 2023 Dec;26 Suppl 1:124-130. doi: 10.1111/ocr.12721. Epub 2023 Oct 17.
5. A roadmap to artificial intelligence (AI): Methods for designing and building AI ready data to promote fairness.
J Biomed Inform. 2024 Jun;154:104654. doi: 10.1016/j.jbi.2024.104654. Epub 2024 May 11.
6. Towards gender equity in artificial intelligence and machine learning applications in dermatology.
J Am Med Inform Assoc. 2022 Jan 12;29(2):400-403. doi: 10.1093/jamia/ocab113.
7. A Conference (Missingness in Action) to Address Missingness in Data and AI in Health Care: Qualitative Thematic Analysis.
J Med Internet Res. 2023 Nov 23;25:e49314. doi: 10.2196/49314.
8. Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach.
JMIR AI. 2023 Dec 6;2:e52888. doi: 10.2196/52888.
9. Artificial intelligence in gastroenterology and hepatology: how to advance clinical practice while ensuring health equity.
Gut. 2022 Sep;71(9):1909-1915. doi: 10.1136/gutjnl-2021-326271. Epub 2022 Jun 10.
10. Human-Centered Design to Address Biases in Artificial Intelligence.
J Med Internet Res. 2023 Mar 24;25:e43251. doi: 10.2196/43251.

Cited By

1. Improving the reporting on health equity in observational research (STROBE-Equity): extension checklist and elaboration.
BMJ. 2025 Sep 3;390:e083882. doi: 10.1136/bmj-2024-083882.
2. Potential source of bias in AI models: lactate measurement in the ICU in sepsis patients as a template.
Front Med (Lausanne). 2025 Jul 9;12:1606254. doi: 10.3389/fmed.2025.1606254. eCollection 2025.
3. A practical guide for nephrologist peer reviewers: evaluating artificial intelligence and machine learning research in nephrology.
Ren Fail. 2025 Dec;47(1):2513002. doi: 10.1080/0886022X.2025.2513002. Epub 2025 Jul 7.
4. Differences in Arterial Blood Gas Testing by Race and Sex across 161 U.S. Hospitals in Four Electronic Health Record Databases.
Am J Respir Crit Care Med. 2025 Jun;211(6):1049-1058. doi: 10.1164/rccm.202406-1242OC.
5. An open-source framework for end-to-end analysis of electronic health record data.
Nat Med. 2024 Nov;30(11):3369-3380. doi: 10.1038/s41591-024-03214-0. Epub 2024 Sep 12.