

Enhancing fairness in AI-enabled medical systems with the attribute neutral framework.

Author Affiliations

The Data Center, Wuhan Children's Hospital (Wuhan Maternal and Child Healthcare Hospital), Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430016, Hubei, China.

Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Guangzhou, 510080, Guangdong, China.

Publication Information

Nat Commun. 2024 Oct 10;15(1):8767. doi: 10.1038/s41467-024-52930-1.

DOI: 10.1038/s41467-024-52930-1
PMID: 39384748
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11464531/
Abstract

Questions of unfairness and inequity pose critical challenges to the successful deployment of artificial intelligence (AI) in healthcare settings. In AI models, unequal performance across protected groups may be partially attributable to the learning of spurious or otherwise undesirable correlations between sensitive attributes and disease-related information. Here, we introduce the Attribute Neutral Framework, designed to disentangle biased attributes from disease-relevant information and subsequently neutralize them to improve representation across diverse subgroups. Within the framework, we develop the Attribute Neutralizer (AttrNzr) to generate neutralized data, for which protected attributes can no longer be easily predicted by humans or by machine learning classifiers. We then utilize these data to train the disease diagnosis model (DDM). Comparative analysis with other unfairness mitigation algorithms demonstrates that AttrNzr outperforms in reducing the unfairness of the DDM while maintaining DDM's overall disease diagnosis performance. Furthermore, AttrNzr supports the simultaneous neutralization of multiple attributes and demonstrates utility even when applied solely during the training phase, without being used in the test phase. Moreover, instead of introducing additional constraints to the DDM, the AttrNzr directly addresses a root cause of unfairness, providing a model-independent solution. Our results with AttrNzr highlight the potential of data-centered and model-independent solutions for fairness challenges in AI-enabled medical systems.
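The paper's AttrNzr is a generative model over medical images (it builds on AttGAN); its details are not reproduced here. As a loose, hypothetical illustration of the data-centered idea — removing attribute-predictive signal from the data *before* training the diagnosis model, rather than constraining the model itself — the sketch below neutralizes a binary protected attribute in synthetic tabular features by projecting out the group mean-difference direction. All names (`neutralize`, the synthetic features) are illustrative stand-ins, not the paper's method.

```python
import numpy as np

def neutralize(X, a):
    """Remove the linear component of X that predicts binary attribute a
    by projecting out the mean-difference direction between the two groups."""
    d = X[a == 1].mean(axis=0) - X[a == 0].mean(axis=0)  # attribute direction
    d = d / np.linalg.norm(d)
    return X - np.outer(X @ d, d)  # project each row onto d's complement

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=500)   # protected attribute (0/1)
X = rng.normal(size=(500, 5))      # disease-related features
X[:, 0] += 2.0 * a                 # feature 0 leaks the attribute

Xn = neutralize(X, a)

# After neutralization, the group gap along the leaky feature collapses,
# so a simple classifier can no longer read the attribute off feature 0.
gap_before = abs(X[a == 1][:, 0].mean() - X[a == 0][:, 0].mean())
gap_after = abs(Xn[a == 1][:, 0].mean() - Xn[a == 0][:, 0].mean())
print(gap_before > 1.0, gap_after < 0.1)  # → True True
```

A linear projection like this only strips linearly decodable attribute information; the appeal of a learned neutralizer such as AttrNzr is handling the nonlinear case while preserving disease-relevant content.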


Figures 1–8 (PMC full text):
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/10c658dca198/41467_2024_52930_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/c5d29a78adb3/41467_2024_52930_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/690f017bd198/41467_2024_52930_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/212ea07fc00c/41467_2024_52930_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/68d98e8dc879/41467_2024_52930_Fig5_HTML.jpg
Fig. 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/7b346bcdafc0/41467_2024_52930_Fig6_HTML.jpg
Fig. 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/5fbcb769515a/41467_2024_52930_Fig7_HTML.jpg
Fig. 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b9d/11464531/2f86ec70a30a/41467_2024_52930_Fig8_HTML.jpg

Similar Articles

1. Enhancing fairness in AI-enabled medical systems with the attribute neutral framework.
Nat Commun. 2024 Oct 10;15(1):8767. doi: 10.1038/s41467-024-52930-1.
2. "Shortcuts" Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation.
J Am Coll Radiol. 2023 Sep;20(9):842-851. doi: 10.1016/j.jacr.2023.06.025. Epub 2023 Jul 27.
3. Artificial intelligence for breast cancer detection and its health technology assessment: A scoping review.
Comput Biol Med. 2025 Jan;184:109391. doi: 10.1016/j.compbiomed.2024.109391. Epub 2024 Nov 22.
4. Fairness of artificial intelligence in healthcare: review and recommendations.
Jpn J Radiol. 2024 Jan;42(1):3-15. doi: 10.1007/s11604-023-01474-3. Epub 2023 Aug 4.
5. Towards fairness-aware and privacy-preserving enhanced collaborative learning for healthcare.
Nat Commun. 2025 Mar 23;16(1):2852. doi: 10.1038/s41467-025-58055-3.
6. A roadmap to artificial intelligence (AI): Methods for designing and building AI ready data to promote fairness.
J Biomed Inform. 2024 Jun;154:104654. doi: 10.1016/j.jbi.2024.104654. Epub 2024 May 11.
7. Detecting shortcut learning for fair medical AI using shortcut testing.
Nat Commun. 2023 Jul 18;14(1):4314. doi: 10.1038/s41467-023-39902-7.
8. Artificial intelligence in healthcare: a primer for medical education in radiomics.
Per Med. 2022 Sep;19(5):445-456. doi: 10.2217/pme-2022-0014. Epub 2022 Jul 26.
9. AI for all: bridging data gaps in machine learning and health.
Transl Behav Med. 2025 Jan 16;15(1). doi: 10.1093/tbm/ibae075.
10. Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review.
J Med Internet Res. 2025 Jan 7;27:e60269. doi: 10.2196/60269.

Cited By

1. Medical laboratory data-based models: opportunities, obstacles, and solutions.
J Transl Med. 2025 Jul 24;23(1):823. doi: 10.1186/s12967-025-06802-x.
2. Optimizing Cancer Treatment: Exploring the Role of AI in Radioimmunotherapy.
Diagnostics (Basel). 2025 Feb 6;15(3):397. doi: 10.3390/diagnostics15030397.

References Cited in This Article

1. Detecting shortcut learning for fair medical AI using shortcut testing.
Nat Commun. 2023 Jul 18;14(1):4314. doi: 10.1038/s41467-023-39902-7.
2. Algorithmic fairness in artificial intelligence for medicine and healthcare.
Nat Biomed Eng. 2023 Jun;7(6):719-742. doi: 10.1038/s41551-023-01056-8. Epub 2023 Jun 28.
3. AI recognition of patient race in medical imaging: a modelling study.
Lancet Digit Health. 2022 Jun;4(6):e406-e414. doi: 10.1016/S2589-7500(22)00063-2. Epub 2022 May 11.
4. A proposed artificial intelligence workflow to address application challenges leveraged on algorithm uncertainty.
iScience. 2022 Feb 21;25(3):103961. doi: 10.1016/j.isci.2022.103961. eCollection 2022 Mar 18.
5. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations.
Nat Med. 2021 Dec;27(12):2176-2182. doi: 10.1038/s41591-021-01595-0. Epub 2021 Dec 10.
6. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis.
Proc Natl Acad Sci U S A. 2020 Jun 9;117(23):12592-12594. doi: 10.1073/pnas.1919012117. Epub 2020 May 26.
7. Treating health disparities with artificial intelligence.
Nat Med. 2020 Jan;26(1):16-17. doi: 10.1038/s41591-019-0649-2.
8. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports.
Sci Data. 2019 Dec 12;6(1):317. doi: 10.1038/s41597-019-0322-0.
9. AttGAN: Facial Attribute Editing by Only Changing What You Want.
IEEE Trans Image Process. 2019 Nov;28(11):5464-5478. doi: 10.1109/TIP.2019.2916751. Epub 2019 May 20.
10. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network.
Nat Med. 2019 Jan;25(1):65-69. doi: 10.1038/s41591-018-0268-3. Epub 2019 Jan 7.