

Empirical Comparison of Post-processing Debiasing Methods for Machine Learning Classifiers in Healthcare.

Authors

Dang Vien Ngoc, Campello Víctor M, Hernández-González Jerónimo, Lekadir Karim

Affiliations

Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain.

Departament d'Informàtica, Matemàtica Aplicada i Estadística, Universitat de Girona, Girona, Spain.

Publication

J Healthc Inform Res. 2025 Mar 20;9(3):465-493. doi: 10.1007/s41666-025-00196-7. eCollection 2025 Sep.

DOI: 10.1007/s41666-025-00196-7
PMID: 40726749
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12290158/
Abstract

Machine learning classifiers in healthcare tend to reproduce or exacerbate existing health disparities due to inherent biases in training data. This relevant issue has brought the attention of researchers in both healthcare and other domains, proposing techniques that deal with it in different stages of the machine learning process. Post-processing methods adjust model predictions to ensure fairness without interfering in the learning process nor requiring access to the original training data, preserving privacy and enabling the application to any trained model. This study rigorously compares state-of-the-art debiasing methods within the family of post-processing techniques across a wide range of synthetic and real-world (healthcare) datasets, by means of different performance and fairness metrics. Our experiments reveal the strengths and weaknesses of each method, examining the trade-offs between group fairness and predictive performance, as well as among different notions of group fairness. Additionally, we analyze the impact on untreated attributes to ensure overall bias mitigation. Our comprehensive evaluation provides insights into how these debiasing methods can be optimally implemented in healthcare settings to balance accuracy and fairness.

SUPPLEMENTARY INFORMATION

The online version contains supplementary material available at 10.1007/s41666-025-00196-7.
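The abstract's central idea, adjusting a trained model's predictions for fairness without retraining or accessing the training data, can be illustrated with one of the simplest post-processing schemes: group-specific decision thresholds chosen to equalize positive prediction rates (demographic parity). The sketch below is a minimal illustration of that general technique, not any specific method evaluated in the paper; all function names, the synthetic data, and the parity target are assumptions for demonstration.

```python
import numpy as np

def group_thresholds_for_parity(scores, groups, target_rate):
    """Pick a per-group decision threshold so that each group's
    positive prediction rate matches target_rate (demographic parity).
    Operates only on fixed model scores; no retraining needed."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # The (1 - target_rate) quantile leaves ~target_rate of this
        # group's scores above the threshold.
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def debias_predict(scores, groups, thresholds):
    """Apply group-specific thresholds to the frozen model's scores."""
    return np.array(
        [scores[i] > thresholds[g] for i, g in enumerate(groups)],
        dtype=int,
    )

# Toy data: a classifier that scores group "b" systematically lower,
# so a single global threshold would select far fewer "b" members.
rng = np.random.default_rng(0)
groups = np.array(["a"] * 500 + ["b"] * 500)
scores = np.concatenate([rng.beta(5, 2, 500), rng.beta(2, 5, 500)])

th = group_thresholds_for_parity(scores, groups, target_rate=0.3)
preds = debias_predict(scores, groups, th)
rate_a = preds[groups == "a"].mean()
rate_b = preds[groups == "b"].mean()
```

After thresholding, both groups receive positive predictions at roughly the 30% target rate. This illustrates the trade-off the abstract discusses: parity is enforced on the treated attribute, but overall accuracy and other fairness notions (e.g., equalized odds) may shift, which is why the paper compares methods across multiple metrics.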


Similar Articles

1
Empirical Comparison of Post-processing Debiasing Methods for Machine Learning Classifiers in Healthcare.
J Healthc Inform Res. 2025 Mar 20;9(3):465-493. doi: 10.1007/s41666-025-00196-7. eCollection 2025 Sep.
2
A Responsible Framework for Assessing, Selecting, and Explaining Machine Learning Models in Cardiovascular Disease Outcomes Among People With Type 2 Diabetes: Methodology and Validation Study.
JMIR Med Inform. 2025 Jun 27;13:e66200. doi: 10.2196/66200.
3
Short-Term Memory Impairment
4
Measures implemented in the school setting to contain the COVID-19 pandemic.
Cochrane Database Syst Rev. 2022 Jan 17;1(1):CD015029. doi: 10.1002/14651858.CD015029.
5
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Clin Orthop Relat Res. 2024 Dec 1;482(12):2193-2208. doi: 10.1097/CORR.0000000000003185. Epub 2024 Jul 23.
6
The Lived Experience of Autistic Adults in Employment: A Systematic Search and Synthesis.
Autism Adulthood. 2024 Dec 2;6(4):495-509. doi: 10.1089/aut.2022.0114. eCollection 2024 Dec.
7
Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights.
Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
8
Psychological therapies for panic disorder with or without agoraphobia in adults: a network meta-analysis.
Cochrane Database Syst Rev. 2016 Apr 13;4(4):CD011004. doi: 10.1002/14651858.CD011004.pub2.
9
Falls prevention interventions for community-dwelling older adults: systematic review and meta-analysis of benefits, harms, and patient values and preferences.
Syst Rev. 2024 Nov 26;13(1):289. doi: 10.1186/s13643-024-02681-3.
10
The effect of sample site and collection procedure on identification of SARS-CoV-2 infection.
Cochrane Database Syst Rev. 2024 Dec 16;12(12):CD014780. doi: 10.1002/14651858.CD014780.

References Cited in This Article

1
Bias in medical AI: Implications for clinical decision-making.
PLOS Digit Health. 2024 Nov 7;3(11):e0000651. doi: 10.1371/journal.pdig.0000651. eCollection 2024 Nov.
2
Challenges in Reducing Bias Using Post-Processing Fairness for Breast Cancer Stage Classification with Deep Learning.
Algorithms. 2024 Apr;17(4). doi: 10.3390/a17040141. Epub 2024 Mar 28.
3
Fairness and bias correction in machine learning for depression prediction across four study populations.
Sci Rep. 2024 Apr 3;14(1):7848. doi: 10.1038/s41598-024-58427-7.
4
Translating Intersectionality to Fair Machine Learning in Health Sciences.
Nat Mach Intell. 2023 May;5(5):476-479. doi: 10.1038/s42256-023-00651-3. Epub 2023 Apr 28.
5
Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review.
JMIR Med Inform. 2022 May 31;10(5):e36388. doi: 10.2196/36388.
6
Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations.
Nat Med. 2021 Dec;27(12):2176-2182. doi: 10.1038/s41591-021-01595-0. Epub 2021 Dec 10.
7
Ethical Machine Learning in Healthcare.
Annu Rev Biomed Data Sci. 2021 Jul;4:123-144. doi: 10.1146/annurev-biodatasci-092820-114757. Epub 2021 May 6.
8
Comparison of Methods to Reduce Bias From Clinical Prediction Models of Postpartum Depression.
JAMA Netw Open. 2021 Apr 1;4(4):e213909. doi: 10.1001/jamanetworkopen.2021.3909.
9
Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification.
Big Data. 2017 Jun;5(2):120-134. doi: 10.1089/big.2016.0048.
10
Why is depression more prevalent in women?
J Psychiatry Neurosci. 2015 Jul;40(4):219-21. doi: 10.1503/jpn.150205.