Suppr 超能文献

An adversarial training framework for mitigating algorithmic biases in clinical machine learning.

Author information

Yang Jenny, Soltan Andrew A S, Eyre David W, Yang Yang, Clifton David A

Affiliations

Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, England.

John Radcliffe Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, England.

Publication information

NPJ Digit Med. 2023 Mar 29;6(1):55. doi: 10.1038/s41746-023-00805-y.

DOI: 10.1038/s41746-023-00805-y
PMID: 36991077
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10050816/
Abstract

Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how these tools may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework that is capable of mitigating biases that may have been acquired through data collection. We demonstrate this proposed framework on the real-world task of rapidly predicting COVID-19, and focus on mitigating site-specific (hospital) and demographic (ethnicity) biases. Using the statistical definition of equalized odds, we show that adversarial training improves outcome fairness, while still achieving clinically-effective screening performances (negative predictive values >0.98). We compare our method to previous benchmarks, and perform prospective and external validation across four independent hospital cohorts. Our method can be generalized to any outcomes, models, and definitions of fairness.
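The fairness criterion the abstract invokes, equalized odds, requires the true-positive and false-positive rates of a classifier to match across protected groups (here, hospital site or ethnicity). A minimal sketch of that gap metric is below; the function name, toy data, and two-group setup are illustrative assumptions, not code from the paper:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in TPR or FPR between groups (0 = equalized odds holds).

    Equalized odds requires P(Yhat=1 | Y=y, A=a) to be equal across groups
    a for both y=1 (true-positive rate) and y=0 (false-positive rate).
    Assumes binary labels/predictions and that every (label, group) cell
    contains at least one sample.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):  # y=0 compares FPRs, y=1 compares TPRs
        rates = [y_pred[(group == a) & (y_true == y)].mean()
                 for a in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return float(max(gaps))

# Toy example: group 1 receives positive predictions more often at both
# label values, so the gap is 0.5 rather than 0.
print(equalized_odds_gap(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 0, 1, 1, 1, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1]))
```

The paper's adversarial training pushes this gap toward zero (an adversary tries to recover the protected attribute from the predictor's output) while a separate clinical threshold preserves the reported negative predictive values above 0.98.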

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5d2d/10060422/a3b8d7e1ee19/41746_2023_805_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5d2d/10060422/4b6ee5ac7ccc/41746_2023_805_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5d2d/10060422/31e03f353a18/41746_2023_805_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5d2d/10060422/4131e3c06039/41746_2023_805_Fig4_HTML.jpg

Similar articles

1. An adversarial training framework for mitigating algorithmic biases in clinical machine learning.
NPJ Digit Med. 2023 Mar 29;6(1):55. doi: 10.1038/s41746-023-00805-y.
2. Algorithmic fairness and bias mitigation for clinical machine learning with deep reinforcement learning.
Nat Mach Intell. 2023;5(8):884-894. doi: 10.1038/s42256-023-00697-3. Epub 2023 Jul 31.
3. Enhancing Fairness in Disease Prediction by Optimizing Multiple Domain Adversarial Networks.
bioRxiv. 2023 Aug 26:2023.08.04.551906. doi: 10.1101/2023.08.04.551906.
4. Analyzing the Impact of Personalization on Fairness in Federated Learning for Healthcare.
J Healthc Inform Res. 2024 Mar 23;8(2):181-205. doi: 10.1007/s41666-024-00164-7. eCollection 2024 Jun.
5. Architectural Design of a Blockchain-Enabled, Federated Learning Platform for Algorithmic Fairness in Predictive Health Care: Design Science Study.
J Med Internet Res. 2023 Oct 30;25:e46547. doi: 10.2196/46547.
6. Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health.
Orthod Craniofac Res. 2023 Dec;26 Suppl 1:124-130. doi: 10.1111/ocr.12721. Epub 2023 Oct 17.
7. FERI: A Multitask-based Fairness Achieving Algorithm with Applications to Fair Organ Transplantation.
AMIA Jt Summits Transl Sci Proc. 2024 May 31;2024:593-602. eCollection 2024.
8. Algorithmic Individual Fairness and Healthcare: A Scoping Review.
medRxiv. 2024 Mar 26:2024.03.25.24304853. doi: 10.1101/2024.03.25.24304853.
9. Mitigating machine learning bias between high income and low-middle income countries for enhanced model fairness and generalizability.
Sci Rep. 2024 Jun 10;14(1):13318. doi: 10.1038/s41598-024-64210-5.
10. Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities.
NPJ Digit Med. 2020 Jul 30;3:99. doi: 10.1038/s41746-020-0304-9. eCollection 2020.

Cited by

1. Bias in predictive models for vitreoretinal diseases: ethnic and socioeconomic disparities in artificial intelligence.
Eye (Lond). 2025 Sep 9. doi: 10.1038/s41433-025-03990-0.
2. Large language models for clinical decision support in gastroenterology and hepatology.
Nat Rev Gastroenterol Hepatol. 2025 Aug 22. doi: 10.1038/s41575-025-01108-1.
3. Equity-enhanced glaucoma progression prediction from OCT with knowledge distillation.
NPJ Digit Med. 2025 Jul 24;8(1):477. doi: 10.1038/s41746-025-01884-9.
4. Clinical Algorithms and the Legacy of Race-Based Correction: Historical Errors, Contemporary Revisions and Equity-Oriented Methodologies for Epidemiologists.
Clin Epidemiol. 2025 Jul 12;17:647-662. doi: 10.2147/CLEP.S527000. eCollection 2025.
5. The ethics of data mining in healthcare: challenges, frameworks, and future directions.
BioData Min. 2025 Jul 11;18(1):47. doi: 10.1186/s13040-025-00461-w.
6. Benchmarking the AI-based diagnostic potential of plasma proteomics for neurodegenerative disease in 17,170 people.
medRxiv. 2025 Jul 1:2025.06.27.25330344. doi: 10.1101/2025.06.27.25330344.
7. Rethinking deep learning in bioimaging through a data centric lens.
Npj Imaging. 2025 Jun 26;3(1):29. doi: 10.1038/s44303-025-00092-0.
8. Equitable Deep Learning for Diabetic Retinopathy Detection Using Multidimensional Retinal Imaging With Fair Adaptive Scaling.
Transl Vis Sci Technol. 2025 Jul 1;14(7):1. doi: 10.1167/tvst.14.7.1.
9. AI-driven multimodal colorimetric analytics for biomedical and behavioral health diagnostics.
Comput Struct Biotechnol J. 2025 May 28;27:2219-2232. doi: 10.1016/j.csbj.2025.05.015. eCollection 2025.
10. A scoping review and evidence gap analysis of clinical AI fairness.
NPJ Digit Med. 2025 Jun 14;8(1):360. doi: 10.1038/s41746-025-01667-2.

References

1. Deep reinforcement learning for multi-class imbalanced training: applications in healthcare.
Mach Learn. 2024;113(5):2655-2674. doi: 10.1007/s10994-023-06481-z. Epub 2023 Nov 28.
2. Algorithmic fairness and bias mitigation for clinical machine learning with deep reinforcement learning.
Nat Mach Intell. 2023;5(8):884-894. doi: 10.1038/s42256-023-00697-3. Epub 2023 Jul 31.
3. Machine learning generalizability across healthcare settings: insights from multi-site COVID-19 screening.
NPJ Digit Med. 2022 Jun 7;5(1):69. doi: 10.1038/s41746-022-00614-9.
4. Real-world evaluation of rapid and laboratory-free COVID-19 triage for emergency care: external validation and pilot deployment of artificial intelligence driven screening.
Lancet Digit Health. 2022 Apr;4(4):e266-e278. doi: 10.1016/S2589-7500(21)00272-7. Epub 2022 Mar 9.
5. Sensitivity of RT-PCR testing of upper respiratory tract samples for SARS-CoV-2 in hospitalised patients: a retrospective cohort study.
Wellcome Open Res. 2022 Feb 1;5:254. doi: 10.12688/wellcomeopenres.16342.2. eCollection 2020.
6. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations.
Nat Med. 2021 Dec;27(12):2176-2182. doi: 10.1038/s41591-021-01595-0. Epub 2021 Dec 10.
7. Federated learning for predicting clinical outcomes in patients with COVID-19.
Nat Med. 2021 Oct;27(10):1735-1743. doi: 10.1038/s41591-021-01506-3. Epub 2021 Sep 15.
8. Synthetic data in machine learning for medicine and healthcare.
Nat Biomed Eng. 2021 Jun;5(6):493-497. doi: 10.1038/s41551-021-00751-8.
9. Rapid triage for COVID-19 using routine clinical data for patients attending hospital: development and prospective validation of an artificial intelligence screening test.
Lancet Digit Health. 2021 Feb;3(2):e78-e87. doi: 10.1016/S2589-7500(20)30274-0. Epub 2020 Dec 11.
10. Clinical sensitivity and interpretation of PCR and serological COVID-19 diagnostics for patients presenting to the hospital.
FASEB J. 2020 Oct;34(10):13877-13884. doi: 10.1096/fj.202001700RR. Epub 2020 Aug 28.