
CRAFT: Concept Recursive Activation FacTorization for Explainability.

Authors

Fel Thomas, Picard Agustin, Bethune Louis, Boissin Thibaut, Vigouroux David, Colin Julien, Cadène Rémi, Serre Thomas

Affiliations

Carney Institute for Brain Science, Brown University, USA.

Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, France.

Publication

Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2023 Jun;2023:2711-2721. doi: 10.1109/cvpr52729.2023.00266. Epub 2023 Aug 22.

Abstract

Attribution methods, which employ heatmaps to identify the most influential regions of an image that impact model decisions, have gained widespread popularity as a type of explainability method. However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image - revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas. In this work, we try to fill in this gap with CRAFT - a novel approach to identify both "what" and "where" by generating concept-based explanations. We introduce 3 new ingredients to the automatic concept extraction literature: (i) a recursive strategy to detect and decompose concepts across layers, (ii) a novel method for a more faithful estimation of concept importance using Sobol indices, and (iii) the use of implicit differentiation to unlock Concept Attribution Maps. We conduct both human and computer vision experiments to demonstrate the benefits of the proposed approach. We show that the proposed concept importance estimation technique is more faithful to the model than previous methods. When evaluating the usefulness of the method for human experimenters on a human-centered utility benchmark, we find that our approach significantly improves on two of the three test scenarios. Our code is freely available: github.com/deel-ai/Craft.
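
The two ingredients of CRAFT that lend themselves most directly to a code illustration are the factorization of activations into concepts and the estimation of concept importance. The following Python sketch is illustrative only, not the authors' implementation (that lives at github.com/deel-ai/Craft): the helper names (extract_concepts, concept_importance, classifier_head) are assumptions, and the random-masking loop is a simplified variance-based stand-in for a proper Sobol total-index estimator.

import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(activations, n_concepts=10):
    # Factorize non-negative activations A (n_patches x n_features) as A ~ U @ W:
    # rows of W act as concept directions, U holds per-patch concept coefficients.
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    U = nmf.fit_transform(np.maximum(activations, 0.0))
    W = nmf.components_
    return U, W

def concept_importance(U, W, classifier_head, n_samples=256, seed=0):
    # Crude variance-based proxy for Sobol-style importance: randomly dampen one
    # concept's coefficients at a time and measure how much the class score moves.
    rng = np.random.default_rng(seed)
    n_patches, n_concepts = U.shape
    baseline = classifier_head(U @ W)  # assumed scalar score for the class being explained
    scores = np.zeros(n_concepts)
    for k in range(n_concepts):
        deltas = []
        for _ in range(n_samples):
            U_pert = U.copy()
            U_pert[:, k] *= rng.uniform(0.0, 1.0, size=n_patches)  # random masking of concept k
            deltas.append(classifier_head(U_pert @ W) - baseline)
        scores[k] = np.var(deltas)
    return scores / (scores.sum() + 1e-12)

In such a pipeline, activations would be collected from image crops at an intermediate layer, and classifier_head would map reconstructed activations to the logit of the class under study; the recursive decomposition across layers and the implicit-differentiation-based Concept Attribution Maps mentioned in the abstract are not covered by this sketch.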


Similar Articles

Concept attribution: Explaining CNN decisions to physicians. Comput Biol Med. 2020 Aug;123:103865. doi: 10.1016/j.compbiomed.2020.103865. Epub 2020 Jun 17.
