

Mitigating Racial And Ethnic Bias And Advancing Health Equity In Clinical Algorithms: A Scoping Review.

Author Affiliations

Michael P. Cary Jr.

Anna Zink, University of Chicago, Chicago, Illinois.

Publication Information

Health Aff (Millwood). 2023 Oct;42(10):1359-1368. doi: 10.1377/hlthaff.2023.00553.

DOI: 10.1377/hlthaff.2023.00553
PMID: 37782868
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10668606/
Abstract

In August 2022 the Department of Health and Human Services (HHS) issued a notice of proposed rulemaking prohibiting covered entities, which include health care providers and health plans, from discriminating against individuals when using clinical algorithms in decision making. However, HHS did not provide specific guidelines on how covered entities should prevent discrimination. We conducted a scoping review of literature published during the period 2011-22 to identify health care applications, frameworks, reviews and perspectives, and assessment tools that identify and mitigate bias in clinical algorithms, with a specific focus on racial and ethnic bias. Our scoping review encompassed 109 articles comprising 45 empirical health care applications that included tools tested in health care settings, 16 frameworks, and 48 reviews and perspectives. We identified a wide range of technical, operational, and systemwide bias mitigation strategies for clinical algorithms, but there was no consensus in the literature on a single best practice that covered entities could employ to meet the HHS requirements. Future research should identify optimal bias mitigation methods for various scenarios, depending on factors such as patient population, clinical setting, algorithm design, and types of bias to be addressed.

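Among the "technical" mitigation strategies the review surveys is auditing a clinical algorithm's performance separately for each racial or ethnic group. The sketch below is illustrative only, not code from the reviewed article: it compares a model's true-positive rate across two hypothetical groups (an equalized-odds style check), using synthetic data and function names of my own.

```python
# Illustrative sketch: compare a clinical risk model's true-positive rate
# (sensitivity) across patient groups. A large gap suggests the model
# under-detects the outcome for one group. All data here are synthetic.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model flagged as positive."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

def tpr_gap_by_group(y_true, y_pred, group):
    """Per-group TPRs and the largest gap between any two groups."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Synthetic outcomes, model predictions, and group labels.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = tpr_gap_by_group(y_true, y_pred, group)
print(rates, gap)  # TPR: A = 2/3, B = 1/3, so gap = 1/3
```

This corresponds to only one family of checks; the review also covers operational and system-wide strategies (e.g., governance and deployment practices) that no single metric captures.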