

A survey of recent methods for addressing AI fairness and bias in biomedicine.

Affiliations

National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD, USA; Department of Computer Science, University of Maryland, College Park, USA.

Department of Population Health Sciences, Weill Cornell Medicine, NY, USA.

Publication Info

J Biomed Inform. 2024 Jun;154:104646. doi: 10.1016/j.jbi.2024.104646. Epub 2024 Apr 25.

DOI: 10.1016/j.jbi.2024.104646
PMID: 38677633
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11129918/
Abstract

OBJECTIVES

Artificial intelligence (AI) systems have the potential to revolutionize clinical practices, including improving diagnostic accuracy and surgical decision-making, while also reducing costs and manpower. However, it is important to recognize that these systems may perpetuate social inequities or demonstrate biases, such as those based on race or gender. Such biases can occur before, during, or after the development of AI models, making it critical to understand and address potential biases to enable the accurate and reliable application of AI models in clinical settings. To mitigate bias concerns during model development, we surveyed recent publications on different debiasing methods in the fields of biomedical natural language processing (NLP) or computer vision (CV). Then we discussed the methods, such as data perturbation and adversarial learning, that have been applied in the biomedical domain to address bias.
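One of the debiasing methods named here, data perturbation, can be illustrated with counterfactual term swapping in clinical text. This is a toy sketch only, not the paper's implementation; the `SWAPS` lexicon and `perturb` helper are illustrative assumptions (a real system would use curated demographic lexicons):

```python
import re

# Hypothetical word pairs; real systems use curated lexicons.
SWAPS = {"he": "she", "she": "he", "male": "female", "female": "male"}

def perturb(text):
    """Counterfactually swap gendered terms so a model can be
    trained (or audited) on both versions of each clinical note."""
    def swap(m):
        w = m.group(0)
        repl = SWAPS[w.lower()]
        return repl.capitalize() if w[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(perturb("He is a 54-year-old male patient."))
# -> She is a 54-year-old female patient.
```

Training on both the original and the perturbed note encourages the model's predictions to be invariant to the swapped attribute; adversarial learning pursues the same goal by penalizing an auxiliary classifier that tries to recover the attribute from the model's representations.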

METHODS

We performed our literature search on PubMed, ACM digital library, and IEEE Xplore of relevant articles published between January 2018 and December 2023 using multiple combinations of keywords. We then filtered the result of 10,041 articles automatically with loose constraints, and manually inspected the abstracts of the remaining 890 articles to identify the 55 articles included in this review. Additional articles in the references are also included in this review. We discuss each method and compare its strengths and weaknesses. Finally, we review other potential methods from the general domain that could be applied to biomedicine to address bias and improve fairness.

RESULTS

The bias of AIs in biomedicine can originate from multiple sources such as insufficient data, sampling bias and the use of health-irrelevant features or race-adjusted algorithms. Existing debiasing methods that focus on algorithms can be categorized into distributional or algorithmic. Distributional methods include data augmentation, data perturbation, data reweighting methods, and federated learning. Algorithmic approaches include unsupervised representation learning, adversarial learning, disentangled representation learning, loss-based methods and causality-based methods.
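Of the distributional methods listed above, data reweighting is the simplest to illustrate. A minimal sketch (illustrative only, not code from the paper) that weights each sample inversely to the frequency of its demographic group:

```python
from collections import Counter

def group_reweight(samples, group_key):
    """Weight each sample inversely to its demographic group's
    frequency, so under-represented groups contribute as much to
    the training loss as well-represented ones."""
    counts = Counter(group_key(s) for s in samples)
    n, k = len(samples), len(counts)
    # n / (k * count[g]): each group's total weight becomes n / k,
    # and the weights still sum to n overall.
    return [n / (k * counts[group_key(s)]) for s in samples]

# 8 records from group "A", 2 from group "B"
records = [{"race": "A"}] * 8 + [{"race": "B"}] * 2
weights = group_reweight(records, lambda r: r["race"])
# group A samples get 0.625 each, group B samples get 2.5 each
```

These weights would typically be passed as per-sample weights to the loss function. Algorithmic approaches such as adversarial or disentangled representation learning instead modify the model or objective rather than the data distribution.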


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffc8/11129918/549c842112e0/nihms-1990249-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffc8/11129918/2dff771c6fbb/nihms-1990249-f0002.jpg

Similar Articles

1
A survey of recent methods for addressing AI fairness and bias in biomedicine.
J Biomed Inform. 2024 Jun;154:104646. doi: 10.1016/j.jbi.2024.104646. Epub 2024 Apr 25.
2
A survey of recent methods for addressing AI fairness and bias in biomedicine.
ArXiv. 2024 Feb 13:arXiv:2402.08250v1.
3
A roadmap to artificial intelligence (AI): Methods for designing and building AI ready data to promote fairness.
J Biomed Inform. 2024 Jun;154:104654. doi: 10.1016/j.jbi.2024.104654. Epub 2024 May 11.
4
Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review.
J Med Internet Res. 2025 Jan 7;27:e60269. doi: 10.2196/60269.
5
Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health.
Orthod Craniofac Res. 2023 Dec;26 Suppl 1:124-130. doi: 10.1111/ocr.12721. Epub 2023 Oct 17.
6
Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models.
J Am Med Inform Assoc. 2024 Apr 19;31(5):1172-1183. doi: 10.1093/jamia/ocae060.
7
A scoping review of fair machine learning techniques when using real-world data.
J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
8
Artificial Intelligence Applications to Measure Food and Nutrient Intakes: Scoping Review.
J Med Internet Res. 2024 Nov 28;26:e54557. doi: 10.2196/54557.
9
Recommendations to promote fairness and inclusion in biomedical AI research and clinical use.
J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
10
Federated Learning in Glaucoma: A Comprehensive Review and Future Perspectives.
Ophthalmol Glaucoma. 2025 Jan-Feb;8(1):92-105. doi: 10.1016/j.ogla.2024.08.004. Epub 2024 Aug 29.

Cited By

1
Efficient Detection of Stigmatizing Language in Electronic Health Records via In-Context Learning: Comparative Analysis and Validation Study.
JMIR Med Inform. 2025 Aug 18;13:e68955. doi: 10.2196/68955.
2
Towards machine learning fairness in classifying multicategory causes of deaths in colorectal or lung cancer patients.
Brief Bioinform. 2025 Jul 2;26(4). doi: 10.1093/bib/bbaf398.
3
Ethical considerations and robustness of artificial neural networks in medical image analysis under data corruption.
Sci Rep. 2025 Aug 11;15(1):29305. doi: 10.1038/s41598-025-15268-2.
4
Evaluating Vision and Pathology Foundation Models for Computational Pathology: A Comprehensive Benchmark Study.
Res Sq. 2025 Jul 4:rs.3.rs-6823810. doi: 10.21203/rs.3.rs-6823810/v1.
5
Framework for bias evaluation in large language models in healthcare settings.
NPJ Digit Med. 2025 Jul 7;8(1):414. doi: 10.1038/s41746-025-01786-w.
6
Multimodal AI in Biomedicine: Pioneering the Future of Biomaterials, Diagnostics, and Personalized Healthcare.
Nanomaterials (Basel). 2025 Jun 10;15(12):895. doi: 10.3390/nano15120895.
7
Uncovering ethical biases in publicly available fetal ultrasound datasets.
NPJ Digit Med. 2025 Jun 13;8(1):355. doi: 10.1038/s41746-025-01739-3.
8
Predicting anorexia nervosa treatment efficacy: an explainable machine learning approach.
J Eat Disord. 2025 Jun 2;13(1):97. doi: 10.1186/s40337-025-01265-3.
9
Mitigating Bias in Machine Learning Models with Ethics-Based Initiatives: The Case of Sepsis.
Am J Bioeth. 2025 May 12:1-14. doi: 10.1080/15265161.2025.2497971.
10
Towards machine learning fairness in classifying multicategory causes of deaths in colorectal or lung cancer patients.
bioRxiv. 2025 Feb 19:2025.02.14.638368. doi: 10.1101/2025.02.14.638368.

References

1
A vision-language foundation model for the generation of realistic chest X-ray images.
Nat Biomed Eng. 2025 Apr;9(4):494-506. doi: 10.1038/s41551-024-01246-y. Epub 2024 Aug 26.
2
Video-Based Deep Learning for Automated Assessment of Left Ventricular Ejection Fraction in Pediatric Patients.
J Am Soc Echocardiogr. 2023 May;36(5):482-489. doi: 10.1016/j.echo.2023.01.015. Epub 2023 Feb 7.
3
Equitable precision medicine for type 2 diabetes.
Lancet Digit Health. 2022 Dec;4(12):e850. doi: 10.1016/S2589-7500(22)00217-5.
4
Bias reduction in representation of histopathology images using deep feature selection.
Sci Rep. 2022 Nov 21;12(1):19994. doi: 10.1038/s41598-022-24317-z.
5
Synthetic Medical Images for Robust, Privacy-Preserving Training of Artificial Intelligence: Application to Retinopathy of Prematurity Diagnosis.
Ophthalmol Sci. 2022 Feb 11;2(2):100126. doi: 10.1016/j.xops.2022.100126. eCollection 2022 Jun.
6
Deepfakes in Ophthalmology: Applications and Realism of Synthetic Retinal Images from Generative Adversarial Networks.
Ophthalmol Sci. 2021 Nov 16;1(4):100079. doi: 10.1016/j.xops.2021.100079. eCollection 2021 Dec.
7
Cardiac aging synthesis from cross-sectional data with conditional generative adversarial networks.
Front Cardiovasc Med. 2022 Sep 23;9:983091. doi: 10.3389/fcvm.2022.983091. eCollection 2022.
8
Fair and Privacy-Preserving Alzheimer's Disease Diagnosis Based on Spontaneous Speech Analysis via Federated Learning.
Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1362-1365. doi: 10.1109/EMBC48229.2022.9871204.
9
Algorithmic fairness in computational medicine.
EBioMedicine. 2022 Oct;84:104250. doi: 10.1016/j.ebiom.2022.104250. Epub 2022 Sep 6.
10
Subpopulation-specific machine learning prognosis for underrepresented patients with double prioritized bias correction.
Commun Med (Lond). 2022 Sep 1;2:111. doi: 10.1038/s43856-022-00165-w. eCollection 2022.