CRAFT: Concept Recursive Activation FacTorization for Explainability.

Authors

Fel Thomas, Picard Agustin, Bethune Louis, Boissin Thibaut, Vigouroux David, Colin Julien, Cadène Rémi, Serre Thomas

Affiliations

Carney Institute for Brain Science, Brown University, USA.

Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, France.

Publication

Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2023 Jun;2023:2711-2721. doi: 10.1109/cvpr52729.2023.00266. Epub 2023 Aug 22.

Abstract

Attribution methods, which employ heatmaps to identify the most influential regions of an image that impact model decisions, have gained widespread popularity as a type of explainability method. However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image - revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas. In this work, we try to fill in this gap with CRAFT - a novel approach to identify both "what" and "where" by generating concept-based explanations. We introduce 3 new ingredients to the automatic concept extraction literature: (i) a recursive strategy to detect and decompose concepts across layers, (ii) a novel method for a more faithful estimation of concept importance using Sobol indices, and (iii) the use of implicit differentiation to unlock Concept Attribution Maps. We conduct both human and computer vision experiments to demonstrate the benefits of the proposed approach. We show that the proposed concept importance estimation technique is more faithful to the model than previous methods. When evaluating the usefulness of the method for human experimenters on a human-centered utility benchmark, we find that our approach significantly improves on two of the three test scenarios. Our code is freely available: github.com/deel-ai/Craft.
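
The first ingredient, recursive concept extraction, builds on non-negative matrix factorization of deep activations. Below is a minimal sketch of that factorization step, assuming scikit-learn and random placeholder activations; the variable names are illustrative and this is not the authors' implementation (see github.com/deel-ai/Craft for that).

import numpy as np
from sklearn.decomposition import NMF

# Placeholder for non-negative deep features (e.g. post-ReLU activations of
# image crops), shape (n_patches, n_features). Purely illustrative data.
rng = np.random.default_rng(0)
activations = rng.random((512, 1024))

n_concepts = 10  # number of concepts to extract (a hyperparameter)
nmf = NMF(n_components=n_concepts, init="nndsvd", max_iter=500)
U = nmf.fit_transform(activations)  # (n_patches, n_concepts): per-patch concept coefficients
W = nmf.components_                 # (n_concepts, n_features): the concept bank

# Each row of W is a concept direction in activation space, and activations ≈ U @ W.
# CRAFT's recursive strategy re-applies such a factorization at earlier layers
# to decompose a concept into sub-concepts.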
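
The second ingredient estimates each concept's importance with total Sobol indices, which measure a variable's contribution to output variance, interactions included. A toy pick-freeze (Jansen) estimator is sketched below; in CRAFT the role of f is played by the class logit as a function of perturbed concept coefficients, while the f used here is a made-up test function.

import numpy as np

def total_sobol(f, dim, n=4096, seed=0):
    """Jansen pick-freeze estimate of total Sobol indices for f on [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA = f(A)
    var = fA.var()
    S_T = np.empty(dim)
    for i in range(dim):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]  # re-sample coordinate i only, freeze the rest
        S_T[i] = np.mean((fA - f(AB_i)) ** 2) / (2.0 * var)
    return S_T

# Toy check: x0 should dominate and x2 should score near zero.
f = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]
print(total_sobol(f, dim=3))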
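
The third ingredient relies on implicit differentiation: concept coefficients are defined only implicitly, as the solution of a non-negative least-squares problem, yet Concept Attribution Maps need their gradient with respect to the activations (and, via the chain rule, the pixels). The hand-rolled sketch below differentiates that solution on its active set; the shapes, names, and active-set shortcut are assumptions for illustration, valid only where the active set is locally stable.

import numpy as np
from scipy.optimize import nnls

def concept_coeffs_and_jacobian(a, W):
    """Solve u* = argmin_{u >= 0} ||a - u @ W||^2 and return (u*, du*/da).

    a: (n_features,) activation vector; W: (n_concepts, n_features) concept bank.
    """
    u, _ = nnls(W.T, a)   # scipy minimizes ||W.T @ u - a|| subject to u >= 0
    S = u > 0             # active (strictly positive) concepts
    Ws = W[S]             # (n_active, n_features)
    # On the active set the KKT conditions reduce to ordinary least squares,
    # u_S = (Ws Ws^T)^{-1} Ws a, so the Jacobian du_S/da has a closed form;
    # inactive coordinates stay at zero with zero gradient.
    J = np.zeros((len(u), len(a)))
    J[S] = np.linalg.solve(Ws @ Ws.T, Ws)
    return u, J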


Similar Articles

1. CRAFT: Concept Recursive Activation FacTorization for Explainability.
   Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2023 Jun;2023:2711-2721. doi: 10.1109/cvpr52729.2023.00266. Epub 2023 Aug 22.
2. R-Cut: Enhancing Explainability in Vision Transformers with Relationship Weighted Out and Cut.
   Sensors (Basel). 2024 Apr 24;24(9):2695. doi: 10.3390/s24092695.
3. Macromolecular crowding: chemistry and physics meet biology (Ascona, Switzerland, 10-14 June 2012).
   Phys Biol. 2013 Aug;10(4):040301. doi: 10.1088/1478-3975/10/4/040301. Epub 2013 Aug 2.
4. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
   Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.
5. Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology.
   Sensors (Basel). 2022 Jul 18;22(14):5346. doi: 10.3390/s22145346.
6. Concept attribution: Explaining CNN decisions to physicians.
   Comput Biol Med. 2020 Aug;123:103865. doi: 10.1016/j.compbiomed.2020.103865. Epub 2020 Jun 17.

