


Characterizing the Contribution of Dependent Features in XAI Methods.

Publication Information

IEEE J Biomed Health Inform. 2024 Nov;28(11):6466-6473. doi: 10.1109/JBHI.2024.3395289. Epub 2024 Nov 6.

DOI: 10.1109/JBHI.2024.3395289
PMID: 38696291
Abstract

Explainable Artificial Intelligence (XAI) provides tools that help one understand how AI models work and how they reach a particular decision or outcome. It increases the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed to make black-box and complex models more digestible from a human perspective. However, one of the main issues XAI methods face, especially when dealing with a high number of features, is the presence of multicollinearity, which casts shadows on the robustness of XAI outcomes such as the ranking of informative features. Most current XAI methods either do not consider collinearity or assume the features are independent, which, in general, is not necessarily true. Here, we propose a simple yet useful proxy that modifies the outcome of any XAI feature-ranking method, allowing one to account for the dependency among the features and to reveal their impact on the outcome. The proposed method was applied to SHAP, as an example of an XAI method that assumes the features are independent. For this purpose, several models were exploited for a well-known classification task (males versus females) using nine cardiac phenotypes extracted from cardiac magnetic resonance imaging as features. Principal component analysis and biological plausibility were employed to validate the proposed method. Our results showed that, in the presence of collinearity, the proposed proxy could lead to a more robust list of informative features than the original SHAP.

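The abstract's core idea — adjusting feature-importance scores so that correlated features share credit — can be illustrated with a toy sketch. This is not the paper's actual proxy: the weighting scheme (absolute pairwise correlation), the function name, and the numbers below are illustrative assumptions only.

```python
# Toy illustration (NOT the paper's proxy): redistribute importance
# scores across correlated features so that a feature correlated with
# an informative one shares some of its credit.

def adjust_importance(importance, corr):
    """Redistribute importances using absolute pairwise correlations.

    importance: raw importance scores (e.g. mean |SHAP| values)
    corr: square matrix of pairwise feature correlations, corr[i][i] == 1
    Returns adjusted scores rescaled to preserve the original total.
    """
    n = len(importance)
    raw = [sum(abs(corr[i][j]) * importance[j] for j in range(n))
           for i in range(n)]
    total = sum(importance)
    scale = total / sum(raw) if sum(raw) else 0.0
    return [r * scale for r in raw]

# Features 0 and 1 are strongly correlated; feature 2 is independent.
imp = [0.6, 0.1, 0.3]          # hypothetical SHAP-based importances
corr = [[1.0, 0.9, 0.0],
        [0.9, 1.0, 0.0],
        [0.0, 0.0, 1.0]]

adj = adjust_importance(imp, corr)
# Feature 1 gains credit from its correlation with feature 0, so the
# adjusted ranking reflects the shared information under collinearity.
```

Under this hypothetical scheme, the correlated pair ends up with similar adjusted scores while the independent feature keeps only its own credit, which mirrors the paper's motivation that independence-assuming rankings can be unstable when features are collinear.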

Similar Articles

1. Characterizing the Contribution of Dependent Features in XAI Methods.
   IEEE J Biomed Health Inform. 2024 Nov;28(11):6466-6473. doi: 10.1109/JBHI.2024.3395289. Epub 2024 Nov 6.
2. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
   Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
3. Human attention guided explainable artificial intelligence for computer vision models.
   Neural Netw. 2024 Sep;177:106392. doi: 10.1016/j.neunet.2024.106392. Epub 2024 May 15.
4. Current methods in explainable artificial intelligence and future prospects for integrative physiology.
   Pflugers Arch. 2025 Apr;477(4):513-529. doi: 10.1007/s00424-025-03067-7. Epub 2025 Feb 25.
5. Do explainable AI (XAI) methods improve the acceptance of AI in clinical practice? An evaluation of XAI methods on Gleason grading.
   J Pathol Clin Res. 2025 Mar;11(2):e70023. doi: 10.1002/2056-4538.70023.
6. Explainability and white box in drug discovery.
   Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
7. How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
   JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.
8. Explainable artificial intelligence for pharmacovigilance: What features are important when predicting adverse outcomes?
   Comput Methods Programs Biomed. 2021 Nov;212:106415. doi: 10.1016/j.cmpb.2021.106415. Epub 2021 Sep 26.
9. Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review.
   Cancer Innov. 2024 Jul 3;3(5):e136. doi: 10.1002/cai2.136. eCollection 2024 Oct.
10. BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data.
   Comput Biol Med. 2025 Jun;191:110124. doi: 10.1016/j.compbiomed.2025.110124. Epub 2025 Apr 15.

Cited By

1. Explainable AI-driven assessment of hydro climatic interactions shaping river discharge dynamics in a monsoonal basin.
   Sci Rep. 2025 Jul 26;15(1):27302. doi: 10.1038/s41598-025-13221-x.
2. Prediction of cardiac remodeling and myocardial fibrosis in athletes based on IVIM-DWI images.
   iScience. 2024 Dec 11;28(1):111567. doi: 10.1016/j.isci.2024.111567. eCollection 2025 Jan 17.
3. XAI-Based Assessment of the AMURA Model for Detecting Amyloid-β and Tau Microstructural Signatures in Alzheimer's Disease.
   IEEE J Transl Eng Health Med. 2024 Jul 17;12:569-579. doi: 10.1109/JTEHM.2024.3430035. eCollection 2024.
4. A review of evaluation approaches for explainable AI with applications in cardiology.
   Artif Intell Rev. 2024;57(9):240. doi: 10.1007/s10462-024-10852-w. Epub 2024 Aug 9.