
Mitigating Algorithmic Bias in AI-Driven Cardiovascular Imaging for Fairer Diagnostics.

Authors

Sufian Md Abu, Alsadder Lujain, Hamzi Wahiba, Zaman Sadia, Sagar A S M Sharifuzzaman, Hamzi Boumediene

Affiliations

IVR Low-Carbon Research Institute, Chang'an University, Xi'an 710018, China.

School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK.

Publication

Diagnostics (Basel). 2024 Nov 27;14(23):2675. doi: 10.3390/diagnostics14232675.

DOI: 10.3390/diagnostics14232675
PMID: 39682584
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11640708/
Abstract

Background: The research addresses algorithmic bias in deep learning models for cardiovascular risk prediction, focusing on fairness across demographic and socioeconomic groups to mitigate health disparities. It integrates fairness-aware algorithms, susceptible carrier-infected-recovered (SCIR) models, and interpretability frameworks to combine fairness with actionable AI insights supported by robust segmentation and classification metrics.

Methods: The research utilised quantitative 3D/4D heart magnetic resonance imaging and tabular datasets from the Cardiac Atlas Project's (CAP) open challenges to explore AI-driven methodologies for mitigating algorithmic bias in cardiac imaging. The SCIR model, known for its robustness, was adapted with the Capuchin algorithm, adversarial debiasing, Fairlearn, and post-processing with equalised odds. The robustness of the SCIR model was further demonstrated in the fairness evaluation metrics, which included demographic parity, equal opportunity difference (0.037), equalised odds difference (0.026), disparate impact (1.081), and Theil index (0.249). For interpretability, YOLOv5, Mask R-CNN, and ResNet18 were implemented with LIME and SHAP. Bias mitigation improved disparate impact (0.80 to 0.95), reduced equal opportunity difference (0.20 to 0.05), and decreased false favourable rates for males (0.0059 to 0.0033) and females (0.0096 to 0.0064) through balanced probability adjustment.

Results: The SCIR model outperformed the SIR model (recovery rate: 1.38 vs. 0.83) with a -10% transmission bias impact. Parameters (β = 0.5, δ = 0.2, γ = 0.15) reduced susceptible counts to 2.53×10⁻¹² and increased recovered counts to 9.98 by t = 50. YOLOv5 achieved high Intersection over Union (IoU) scores (94.8%, 93.7%, and 80.6% for normal, severe, and abnormal cases). Mask R-CNN showed 82.5% peak confidence, while ResNet demonstrated a 10.4% accuracy drop under noise. Performance metrics (IoU: 0.91-0.96, Dice: 0.941-0.980, Kappa: 0.95) highlighted strong predictive accuracy and reliability.

Conclusions: The findings validate the effectiveness of fairness-aware algorithms in addressing biases in cardiovascular predictive models. The integration of fairness and explainable AI not only promotes equitable diagnostic precision but also significantly reduces diagnostic disparities across vulnerable populations. This reduction in disparities is a key outcome of the research, enhancing clinical trust in AI-driven systems. These promising results pave the way for future work on scalability in real-world clinical settings and on limitations such as computational complexity in large-scale data processing.
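The group-fairness metrics quoted in the abstract (disparate impact, equal opportunity difference) have standard definitions over binary predictions split by a protected attribute. A minimal sketch of those definitions, assuming NumPy arrays of 0/1 labels and a boolean mask for the privileged group (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fairness_metrics(y_true, y_pred, privileged):
    """Disparate impact and equal opportunity difference for binary predictions.

    privileged is a boolean mask selecting the privileged group.
    """
    unpriv = ~privileged
    # Selection (positive-prediction) rates per group
    sel_priv = y_pred[privileged].mean()
    sel_unpriv = y_pred[unpriv].mean()
    # Disparate impact: ratio of selection rates (1.0 = parity)
    disparate_impact = sel_unpriv / sel_priv
    # True-positive rates per group: mean prediction among actual positives
    tpr_priv = y_pred[privileged & (y_true == 1)].mean()
    tpr_unpriv = y_pred[unpriv & (y_true == 1)].mean()
    # Equal opportunity difference: absolute TPR gap between groups
    eq_opp_diff = abs(tpr_priv - tpr_unpriv)
    return disparate_impact, eq_opp_diff
```

Under these definitions, the reported move of disparate impact from 0.80 toward 0.95 means the groups' selection rates became more nearly equal (1.0 is parity), and the drop in equal opportunity difference from 0.20 to 0.05 means the true-positive-rate gap narrowed.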

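The results report SCIR parameters β = 0.5, δ = 0.2, γ = 0.15 driving susceptible counts down and recovered counts up by t = 50. The paper's exact compartmental equations are not reproduced in this record, so the following forward-Euler sketch uses a generic susceptible-carrier-infected-recovered formulation with assumed mass-action flows; the equations, initial conditions, and function name are illustrative assumptions, not the authors' model:

```python
def scir_simulate(beta=0.5, delta=0.2, gamma=0.15,
                  s0=0.99, c0=0.01, i0=0.0, r0=0.0,
                  t_end=50.0, dt=0.01):
    """Forward-Euler integration of a generic SCIR compartmental model.

    Assumed flows (illustrative, not the paper's exact equations):
      S -> C at rate beta * S * (C + I)   new exposures become carriers
      C -> I at rate delta * C            carriers progress to infected
      I -> R at rate gamma * I            infected recover
    """
    s, c, i, r = s0, c0, i0, r0
    for _ in range(int(t_end / dt)):
        new_exposed = beta * s * (c + i)
        ds = -new_exposed
        dc = new_exposed - delta * c
        di = delta * c - gamma * i
        dr = gamma * i
        # Update every compartment from the same state (one Euler step),
        # so the total population s + c + i + r is conserved exactly.
        s, c, i, r = s + dt * ds, c + dt * dc, i + dt * di, r + dt * dr
    return s, c, i, r
```

Because the four flow terms cancel pairwise, the total population is conserved at every step; with these rates the susceptible fraction collapses and most of the population ends in the recovered compartment by t = 50, qualitatively matching the trajectory described in the results.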

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/b1538574c66d/diagnostics-14-02675-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/c88ff7ab1138/diagnostics-14-02675-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/7b60e6a5694f/diagnostics-14-02675-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/9e28a6bd657a/diagnostics-14-02675-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/c34ce5e08633/diagnostics-14-02675-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/a0efa82d4c96/diagnostics-14-02675-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/9a4d02628a1b/diagnostics-14-02675-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/d323f76d828a/diagnostics-14-02675-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/5f50e38a4e49/diagnostics-14-02675-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/c6b5c3685535/diagnostics-14-02675-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/a9644d801147/diagnostics-14-02675-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/3d2e7f35c43a/diagnostics-14-02675-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/ede836962f44/diagnostics-14-02675-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/d229bc847a37/diagnostics-14-02675-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/d2d92b916d07/diagnostics-14-02675-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/ce2a8752bd49/diagnostics-14-02675-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/4350ab7d6386/diagnostics-14-02675-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/51a37a97d825/diagnostics-14-02675-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/ed9fb6bd6a62/diagnostics-14-02675-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/93cd04bae954/diagnostics-14-02675-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/236b/11640708/5912fdf24b69/diagnostics-14-02675-g020.jpg

Similar articles

1. Mitigating Algorithmic Bias in AI-Driven Cardiovascular Imaging for Fairer Diagnostics. Diagnostics (Basel). 2024 Nov 27;14(23):2675. doi: 10.3390/diagnostics14232675.
2. A scoping review of fair machine learning techniques when using real-world data. J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
3. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc. 2024 Apr 19;31(5):1172-1183. doi: 10.1093/jamia/ocae060.
4. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. ArXiv. 2024 Jul 1:arXiv:2310.19917v3.
5. Evaluating Algorithmic Bias in 30-Day Hospital Readmission Models: Retrospective Analysis. J Med Internet Res. 2024 Apr 18;26:e47125. doi: 10.2196/47125.
6. D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias. IEEE Trans Vis Comput Graph. 2023 Jan;29(1):473-482. doi: 10.1109/TVCG.2022.3209484. Epub 2022 Dec 16.
7. Improving Fairness in AI Models on Electronic Health Records: The Case for Federated Learning Methods. FAccT 23 (2023). 2023 Jun;2023:1599-1608. doi: 10.1145/3593013.3594102. Epub 2023 Jun 12.
8. Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review. JMIR Med Inform. 2022 May 31;10(5):e36388. doi: 10.2196/36388.
9. Artificial intelligence in hospital infection prevention: an integrative review. Front Public Health. 2025 Apr 2;13:1547450. doi: 10.3389/fpubh.2025.1547450. eCollection 2025.
10. A survey of recent methods for addressing AI fairness and bias in biomedicine. J Biomed Inform. 2024 Jun;154:104646. doi: 10.1016/j.jbi.2024.104646. Epub 2024 Apr 25.

Cited by

1. Bias in predictive models for vitreoretinal diseases: ethnic and socioeconomic disparities in artificial intelligence. Eye (Lond). 2025 Sep 9. doi: 10.1038/s41433-025-03990-0.
2. Artificial intelligence in mental health: integrating opportunities and challenges of multimodal deep learning for mental disorder prevention and treatment. Ann Med Surg (Lond). 2025 Jul 22;87(9):5757-5761. doi: 10.1097/MS9.0000000000003624. eCollection 2025 Sep.
3. Artificial Intelligence and ECG: A New Frontier in Cardiac Diagnostics and Prevention. Biomedicines. 2025 Jul 9;13(7):1685. doi: 10.3390/biomedicines13071685.
4. Serum anion gap and its interaction with diabetes in predicting mortality among critically ill patients with non-traumatic intracerebral hemorrhage. Eur J Med Res. 2025 Jul 2;30(1):546. doi: 10.1186/s40001-025-02810-1.
5. Explainable Artificial Intelligence in Radiological Cardiovascular Imaging-A Systematic Review. Diagnostics (Basel). 2025 May 31;15(11):1399. doi: 10.3390/diagnostics15111399.

References

1. FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare. Patterns (N Y). 2024 Sep 12;5(10):101059. doi: 10.1016/j.patter.2024.101059. eCollection 2024 Oct 11.
2. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review. BMC Med Inform Decis Mak. 2024 Sep 4;24(1):247. doi: 10.1186/s12911-024-02653-6.
3. Advancing Fairness in Cardiac Care: Strategies for Mitigating Bias in Artificial Intelligence Models Within Cardiology. Can J Cardiol. 2024 Oct;40(10):1907-1921. doi: 10.1016/j.cjca.2024.04.026. Epub 2024 May 11.
4. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc. 2024 Apr 19;31(5):1172-1183. doi: 10.1093/jamia/ocae060.
5. Algorithmic fairness in cardiovascular disease risk prediction: overcoming inequalities. Open Heart. 2023 Nov;10(2). doi: 10.1136/openhrt-2023-002395.
6. Development of a Machine Learning-Based Prescriptive Tool to Address Racial Disparities in Access to Care After Penetrating Trauma. JAMA Surg. 2023 Oct 1;158(10):1088-1095. doi: 10.1001/jamasurg.2023.2293.
7. Machine learning in predicting outcomes for stroke patients following rehabilitation treatment: A systematic review. PLoS One. 2023 Jun 28;18(6):e0287308. doi: 10.1371/journal.pone.0287308. eCollection 2023.
8. ResNet and its application to medical image processing: Research progress and challenges. Comput Methods Programs Biomed. 2023 Oct;240:107660. doi: 10.1016/j.cmpb.2023.107660. Epub 2023 Jun 8.
9. Machine learning for diagnosis of myocardial infarction using cardiac troponin concentrations. Nat Med. 2023 May;29(5):1201-1210. doi: 10.1038/s41591-023-02325-4. Epub 2023 May 11.
10. Cardiovascular diseases prediction by machine learning incorporation with deep learning. Front Med (Lausanne). 2023 Apr 17;10:1150933. doi: 10.3389/fmed.2023.1150933. eCollection 2023.