

Improving Fairness in AI Models on Electronic Health Records: The Case for Federated Learning Methods.

Author Information

Raphael Poulain, Mirza Farhan Bin Tarek, Rahmatollah Beheshti

Affiliation

University of Delaware, USA.

Publication Information

FAccT '23 (2023). 2023 Jun;2023:1599-1608. doi: 10.1145/3593013.3594102. Epub 2023 Jun 12.

Abstract

Developing AI tools that preserve fairness is of critical importance, especially in high-stakes applications such as those in healthcare. However, health AI models' overall prediction performance is often prioritized over the possible biases such models could have. In this study, we show one possible approach to mitigating bias concerns by having healthcare institutions collaborate through a federated learning (FL) paradigm, a popular choice in healthcare settings. While FL methods with an emphasis on fairness have been previously proposed, their underlying models and local implementation techniques, as well as their possible applications to the healthcare domain, remain largely underinvestigated. Therefore, we propose a comprehensive FL approach with adversarial debiasing and a fair aggregation method, suitable for various fairness metrics, in the healthcare domain where electronic health records are used. Not only does our approach explicitly mitigate bias as part of the optimization process, but an FL-based paradigm also implicitly helps address data imbalance and increase the data size, offering a practical solution for healthcare applications. We empirically demonstrate our method's superior performance in multiple experiments simulating large-scale real-world scenarios and compare it to several baselines. Our method achieves promising fairness performance with the lowest impact on overall discrimination performance (accuracy). Our code is available at https://github.com/healthylaife/FairFedAvg.
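
The abstract names two complementary mechanisms, adversarial debiasing during local training and a fairness-aware aggregation step at the server, but does not spell out the aggregation rule. The following is a minimal sketch, not the authors' FairFedAvg implementation (available at the repository linked above), of how a FedAvg-style server could weight client updates by both local data size and a local fairness gap; the demographic_parity_gap metric, the exponential down-weighting, and all names below are illustrative assumptions.

```python
# Minimal sketch of fairness-weighted federated averaging.
# NOT the authors' FairFedAvg implementation; the weighting rule and all
# names here are illustrative assumptions.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def fair_aggregate(client_params, client_sizes, client_gaps, beta=1.0):
    """Aggregate client model parameters into a global model.

    client_params: list of 1-D parameter vectors (one per client/hospital)
    client_sizes:  number of local training samples per client
    client_gaps:   local fairness gap per client (e.g., demographic parity gap)
    beta:          how strongly clients with larger gaps are down-weighted
    """
    sizes = np.asarray(client_sizes, dtype=float)
    gaps = np.asarray(client_gaps, dtype=float)
    # Standard FedAvg weights by data size; here clients whose local
    # predictions show a larger fairness gap are additionally discounted.
    raw = sizes * np.exp(-beta * gaps)
    coeffs = raw / raw.sum()
    stacked = np.stack(client_params)          # shape: (n_clients, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Toy usage with three simulated hospitals; in practice the gap would be
# computed from each hospital's local validation predictions.
rng = np.random.default_rng(0)
params = [rng.normal(size=4) for _ in range(3)]
sizes = [1000, 400, 250]

y_pred = np.array([1, 0, 1, 1, 0, 0])
sensitive = np.array([0, 0, 0, 1, 1, 1])
gaps = [0.02, demographic_parity_gap(y_pred, sensitive), 0.08]  # ~[0.02, 0.33, 0.08]

global_params = fair_aggregate(params, sizes, gaps, beta=2.0)
print(global_params)
```

In the paper itself, bias is also mitigated on the client side through adversarial debiasing during local training; the sketch above covers only the server-side aggregation idea.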


Similar Articles

2. Analyzing the Impact of Personalization on Fairness in Federated Learning for Healthcare.
J Healthc Inform Res. 2024 Mar 23;8(2):181-205. doi: 10.1007/s41666-024-00164-7. eCollection 2024 Jun.
3. Unified fair federated learning for digital healthcare.
Patterns (N Y). 2023 Dec 28;5(1):100907. doi: 10.1016/j.patter.2023.100907. eCollection 2024 Jan 12.

Cited By

1. Lessons from complex systems science for AI governance.
Patterns (N Y). 2025 Aug 1;6(8):101341. doi: 10.1016/j.patter.2025.101341. eCollection 2025 Aug 8.
7. AI-driven healthcare: Fairness in AI healthcare: A survey.
PLOS Digit Health. 2025 May 20;4(5):e0000864. doi: 10.1371/journal.pdig.0000864. eCollection 2025 May.
8. On the conversational persuasiveness of GPT-4.
Nat Hum Behav. 2025 May 19. doi: 10.1038/s41562-025-02194-6.

