
A survey of recent methods for addressing AI fairness and bias in biomedicine.

Authors

Yang Yifan, Lin Mingquan, Zhao Han, Peng Yifan, Huang Furong, Lu Zhiyong

Affiliations

National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD, USA.

Department of Computer Science, University of Maryland, College Park, MD, USA.

Publication

ArXiv. 2024 Feb 13:arXiv:2402.08250v1.

PMID: 38529077
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10962742/
Abstract

OBJECTIVES

Artificial intelligence (AI) systems have the potential to revolutionize clinical practices, including improving diagnostic accuracy and surgical decision-making, while also reducing costs and manpower. However, it is important to recognize that these systems may perpetuate social inequities or demonstrate biases, such as those based on race or gender. Such biases can occur before, during, or after the development of AI models, making it critical to understand and address them to enable the accurate and reliable application of AI models in clinical settings. To mitigate bias concerns during model development, we surveyed recent publications on debiasing methods in biomedical natural language processing (NLP) and computer vision (CV). We then discuss methods, such as data perturbation and adversarial learning, that have been applied in the biomedical domain to address bias.
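The bias-probing role of data perturbation mentioned above can be sketched as a counterfactual demographic swap; the term pairs, example text, and function name below are illustrative assumptions, not details taken from the survey:

```python
# Hypothetical sketch of counterfactual data perturbation for bias probing:
# swap demographic terms in clinical text and compare model outputs on the
# original and perturbed versions. Term pairs and texts are illustrative.

SWAP_PAIRS = {"male": "female", "female": "male", "he": "she", "she": "he"}

def perturb_demographics(text: str) -> str:
    """Return a counterfactual copy of `text` with demographic terms swapped."""
    tokens = text.split()
    swapped = [SWAP_PAIRS.get(tok.lower(), tok) for tok in tokens]
    return " ".join(swapped)

original = "the patient is a 54 year old male smoker"
counterfactual = perturb_demographics(original)
# A fair model should score both variants (nearly) identically;
# a large score gap between them flags demographic sensitivity.
```

In practice the same idea is applied with richer term lexicons and with model predictions compared across the paired inputs rather than on surface text alone.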

METHODS

We searched PubMed, the ACM Digital Library, and IEEE Xplore for relevant articles published between January 2018 and December 2023 using multiple combinations of keywords. We then automatically filtered the resulting 10,041 articles with loose constraints and manually inspected the abstracts of the remaining 890 articles to identify the 55 articles included in this review. Additional articles drawn from their references are also included. We discuss each method and compare its strengths and weaknesses. Finally, we review other potential methods from the general domain that could be applied in biomedicine to address bias and improve fairness.
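The automatic "loose constraint" filtering step described above might look like the following sketch; the keyword lists and example abstracts are assumptions for illustration, not the survey's actual query:

```python
# Illustrative sketch of loose-constraint filtering: keep an article only if
# its abstract mentions at least one fairness-related keyword AND at least
# one biomedical keyword. Both keyword sets are assumed for this example.

FAIRNESS_TERMS = {"fairness", "bias", "debiasing", "equity"}
BIOMED_TERMS = {"clinical", "biomedical", "medical", "health"}

def passes_loose_filter(abstract: str) -> bool:
    """True if the abstract matches both keyword groups."""
    words = set(abstract.lower().split())
    return bool(words & FAIRNESS_TERMS) and bool(words & BIOMED_TERMS)

abstracts = [
    "mitigating bias in clinical prediction models",
    "a new optimizer for image classification",
]
kept = [a for a in abstracts if passes_loose_filter(a)]
# kept -> ["mitigating bias in clinical prediction models"]
```

A loose filter like this trades precision for recall, which is why the surviving 890 abstracts still required manual inspection.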

RESULTS

Bias in biomedical AI can originate from multiple sources, such as insufficient data, sampling bias, and the use of health-irrelevant features or race-adjusted algorithms. Existing algorithm-focused debiasing methods can be categorized as distributional or algorithmic. Distributional methods include data augmentation, data perturbation, data reweighting, and federated learning. Algorithmic approaches include unsupervised representation learning, adversarial learning, disentangled representation learning, loss-based methods, and causality-based methods.
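As a minimal illustration of data reweighting, one of the distributional methods listed above, the sketch below assigns each sample a weight inversely proportional to its demographic group's frequency, so underrepresented groups contribute equally to the training loss (the group labels and the exact scheme are illustrative assumptions):

```python
# Minimal sketch of inverse-frequency data reweighting: every group's
# samples together receive the same total weight, regardless of group size.

from collections import Counter

def inverse_frequency_weights(groups):
    """Return per-sample weights so each group has equal total weight."""
    counts = Counter(groups)
    n_groups = len(counts)
    n = len(groups)
    # Each group's samples together get total weight n / n_groups.
    return [n / (n_groups * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]  # group B is underrepresented
weights = inverse_frequency_weights(groups)
# weights -> [0.666..., 0.666..., 0.666..., 2.0]; each group sums to 2.0
```

These weights would typically be passed to a weighted loss (e.g. a `sample_weight` argument) during training, boosting the influence of minority-group samples without duplicating data.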


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51fc/10962742/e5a522a0de45/nihpp-2402.08250v1-f0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51fc/10962742/15f95a3683eb/nihpp-2402.08250v1-f0002.jpg

Similar Articles

1. A survey of recent methods for addressing AI fairness and bias in biomedicine. ArXiv. 2024 Feb 13:arXiv:2402.08250v1.
2. A survey of recent methods for addressing AI fairness and bias in biomedicine. J Biomed Inform. 2024 Jun;154:104646. doi: 10.1016/j.jbi.2024.104646. Epub 2024 Apr 25.
3. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc. 2024 Apr 19;31(5):1172-1183. doi: 10.1093/jamia/ocae060.
4. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. ArXiv. 2024 Jul 1:arXiv:2310.19917v3.
5. Improving Fairness in AI Models on Electronic Health Records: The Case for Federated Learning Methods. FAccT 23 (2023). 2023 Jun;2023:1599-1608. doi: 10.1145/3593013.3594102. Epub 2023 Jun 12.
6. Algorithmic Individual Fairness and Healthcare: A Scoping Review. medRxiv. 2024 Mar 26:2024.03.25.24304853. doi: 10.1101/2024.03.25.24304853.
7. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas. Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
8. Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health. Orthod Craniofac Res. 2023 Dec;26 Suppl 1:124-130. doi: 10.1111/ocr.12721. Epub 2023 Oct 17.
9. Recommendations to promote fairness and inclusion in biomedical AI research and clinical use. J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
10. A scoping review of fair machine learning techniques when using real-world data. J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.

References Cited in This Article

1. A vision-language foundation model for the generation of realistic chest X-ray images. Nat Biomed Eng. 2025 Apr;9(4):494-506. doi: 10.1038/s41551-024-01246-y. Epub 2024 Aug 26.
2. Video-Based Deep Learning for Automated Assessment of Left Ventricular Ejection Fraction in Pediatric Patients. J Am Soc Echocardiogr. 2023 May;36(5):482-489. doi: 10.1016/j.echo.2023.01.015. Epub 2023 Feb 7.
3. Equitable precision medicine for type 2 diabetes. Lancet Digit Health. 2022 Dec;4(12):e850. doi: 10.1016/S2589-7500(22)00217-5.
4. Bias reduction in representation of histopathology images using deep feature selection. Sci Rep. 2022 Nov 21;12(1):19994. doi: 10.1038/s41598-022-24317-z.
5. Synthetic Medical Images for Robust, Privacy-Preserving Training of Artificial Intelligence: Application to Retinopathy of Prematurity Diagnosis. Ophthalmol Sci. 2022 Feb 11;2(2):100126. doi: 10.1016/j.xops.2022.100126. eCollection 2022 Jun.
6. Deepfakes in Ophthalmology: Applications and Realism of Synthetic Retinal Images from Generative Adversarial Networks. Ophthalmol Sci. 2021 Nov 16;1(4):100079. doi: 10.1016/j.xops.2021.100079. eCollection 2021 Dec.
7. Cardiac aging synthesis from cross-sectional data with conditional generative adversarial networks. Front Cardiovasc Med. 2022 Sep 23;9:983091. doi: 10.3389/fcvm.2022.983091. eCollection 2022.
8. Fair and Privacy-Preserving Alzheimer's Disease Diagnosis Based on Spontaneous Speech Analysis via Federated Learning. Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1362-1365. doi: 10.1109/EMBC48229.2022.9871204.
9. Algorithmic fairness in computational medicine. EBioMedicine. 2022 Oct;84:104250. doi: 10.1016/j.ebiom.2022.104250. Epub 2022 Sep 6.
10. Subpopulation-specific machine learning prognosis for underrepresented patients with double prioritized bias correction. Commun Med (Lond). 2022 Sep 1;2:111. doi: 10.1038/s43856-022-00165-w. eCollection 2022.