
XAI-TRIS: non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance.

Authors

Clark Benedict, Wilming Rick, Haufe Stefan

Affiliations

Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin, Germany.

Technische Universität Berlin, Str. des 17. Juni 135, 10623 Berlin, Germany.

Publication information

Mach Learn. 2024;113(9):6871-6910. doi: 10.1007/s10994-024-06574-3. Epub 2024 Jul 16.

DOI: 10.1007/s10994-024-06574-3
PMID: 39132312
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11306297/
Abstract

The field of 'explainable' artificial intelligence (XAI) has produced highly acclaimed methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features. Yet, a lack of formal underpinning leaves it unclear as to what conclusions can safely be drawn from the results of a given XAI method and has also so far hindered the theoretical verification and empirical validation of XAI methods. This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies. Here, we craft benchmark datasets for one linear and three different non-linear classification scenarios, in which the important class-conditional features are known by design, serving as ground truth explanations. Using novel quantitative metrics, we benchmark the explanation performance of a wide set of XAI methods across three deep learning model architectures. We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge detection methods, attributing false-positive importance to features with no statistical relationship to the prediction target rather than truly important features. Moreover, we demonstrate that explanations derived from different model architectures can be vastly different; thus, prone to misinterpretation even under controlled conditions.
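
To make the evaluation setup concrete, below is a minimal sketch, not the authors' code or their actual metrics, of how explanation performance can be scored when the important class-conditional features are known by design. The top-k precision function, the 64x64 toy data, and the Sobel-based edge baseline are illustrative assumptions; the paper's own benchmarks and quantitative metrics differ in detail.

```python
# Minimal sketch (assumed setup, not the paper's implementation): score an
# attribution map against a ground-truth mask of important pixels, and compare
# against the random and edge-detection baselines the abstract mentions.
import numpy as np
from scipy import ndimage

def topk_precision(attribution, gt_mask):
    # Fraction of the k most strongly attributed pixels (k = number of truly
    # important pixels) that fall inside the ground-truth mask.
    k = int(gt_mask.sum())
    top_idx = np.argsort(np.abs(attribution).ravel())[-k:]
    return float(gt_mask.ravel()[top_idx].mean())

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))          # stand-in input image
gt_mask = np.zeros((64, 64), dtype=bool)   # important pixels, known by design
gt_mask[20:30, 20:30] = True

# Stand-ins: a (here random) map in place of a real XAI method's output,
# plus the two baseline families used for comparison.
xai_attribution = rng.normal(size=(64, 64))
random_baseline = rng.normal(size=(64, 64))
edge_baseline = np.hypot(ndimage.sobel(image, axis=0),
                         ndimage.sobel(image, axis=1))

for name, attr in [("XAI method", xai_attribution),
                   ("random baseline", random_baseline),
                   ("edge detector", edge_baseline)]:
    print(f"{name:15s} top-k precision: {topk_precision(attr, gt_mask):.3f}")
```

Under a score of this kind, an XAI method that cannot beat the random or edge-detector baselines is attributing false-positive importance to unimportant features, which mirrors the comparison described in the abstract.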


Figures 1-18: full-size images are available via the PMC article page linked above.

Similar articles

1. XAI-TRIS: non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance.
Mach Learn. 2024;113(9):6871-6910. doi: 10.1007/s10994-024-06574-3. Epub 2024 Jul 16.
2. Scrutinizing XAI using linear ground-truth data with suppressor variables.
Mach Learn. 2022;111(5):1903-1923. doi: 10.1007/s10994-022-06167-y. Epub 2022 Apr 13.
3. Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification.
J Biomed Inform. 2024 Aug;156:104673. doi: 10.1016/j.jbi.2024.104673. Epub 2024 Jun 9.
4. Benchmarking the influence of pre-training on explanation performance in MR image classification.
Front Artif Intell. 2024 Feb 26;7:1330919. doi: 10.3389/frai.2024.1330919. eCollection 2024.
5. Toward explainable AI-empowered cognitive health assessment.
Front Public Health. 2023 Mar 9;11:1024195. doi: 10.3389/fpubh.2023.1024195. eCollection 2023.
6. To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods.
PeerJ Comput Sci. 2021 Apr 16;7:e479. doi: 10.7717/peerj-cs.479. eCollection 2021.
7. Toward explainable AI (XAI) for mental health detection based on language behavior.
Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
8. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks.
Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. Epub 2023 Feb 18.
9. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
10. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.

Cited by

1. Benchmarking the influence of pre-training on explanation performance in MR image classification.
Front Artif Intell. 2024 Feb 26;7:1330919. doi: 10.3389/frai.2024.1330919. eCollection 2024.

References

1. Benchmarking the influence of pre-training on explanation performance in MR image classification.
Front Artif Intell. 2024 Feb 26;7:1330919. doi: 10.3389/frai.2024.1330919. eCollection 2024.
2. Scrutinizing XAI using linear ground-truth data with suppressor variables.
Mach Learn. 2022;111(5):1903-1923. doi: 10.1007/s10994-022-06167-y. Epub 2022 Apr 13.
3. All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously.
J Mach Learn Res. 2019;20.
4. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.
PLoS One. 2015 Jul 10;10(7):e0130140. doi: 10.1371/journal.pone.0130140. eCollection 2015.
5. On the interpretation of weight vectors of linear models in multivariate neuroimaging.
Neuroimage. 2014 Feb 15;87:96-110. doi: 10.1016/j.neuroimage.2013.10.067. Epub 2013 Nov 15.