DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks

Authors

Sarah Jabbour, Gregory Kondas, Ella Kazerooni, Michael Sjoding, David Fouhey, Jenna Wiens

Affiliations

University of Michigan, Ann Arbor, MI, USA.

New York University, New York, NY, USA.

Publication

Comput Vis ECCV. 2025;15122:35-51. doi: 10.1007/978-3-031-73039-9_3. Epub 2024 Oct 31.

Abstract

We propose a permutation-based explanation method for image classifiers. Current image-model explanations like activation maps are limited to instance-based explanations in the pixel space, making it difficult to understand global model behavior. In contrast, permutation-based explanations for tabular data classifiers measure feature importance by comparing model performance on data before and after permuting a feature. We propose an explanation method for image-based models that permutes interpretable concepts across dataset images. Given a dataset of images labeled with specific concepts like captions, we permute a concept across examples in the text space and then generate images via a text-conditioned diffusion model. Feature importance is then reflected by the change in model performance relative to unpermuted data. When applied to a set of concepts, the method generates a ranking of feature importance. We show this approach recovers underlying model feature importance on synthetic and real-world image classification tasks.
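The procedure described in the abstract (permute a concept across examples in text space, regenerate images with a text-conditioned diffusion model, and measure the performance drop) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `concept_of`, `generate`, and `score` are hypothetical stand-ins supplied by the caller, and in a real pipeline `generate` would be a text-conditioned diffusion model rather than a plain function.

```python
import random


def concept_importance(model, captions, labels, concept_of, generate, score, seed=0):
    """Estimate the importance of a concept by permuting it across captions.

    model      -- the image classifier under inspection
    captions   -- one text description per dataset image
    labels     -- ground-truth labels, aligned with captions
    concept_of -- extracts the concept phrase from a caption (hypothetical helper)
    generate   -- text -> image; stand-in for a text-conditioned diffusion model
    score      -- score(model, images, labels) -> performance metric
    """
    # Baseline performance on images generated from the unpermuted captions.
    baseline = score(model, [generate(c) for c in captions], labels)

    # Permute the concept across examples in the text space.
    concepts = [concept_of(c) for c in captions]
    shuffled = concepts[:]
    random.Random(seed).shuffle(shuffled)
    permuted_captions = [
        cap.replace(old, new) for cap, old, new in zip(captions, concepts, shuffled)
    ]

    # Regenerate images from the permuted captions and re-score the model.
    permuted = score(model, [generate(c) for c in permuted_captions], labels)

    # Importance is the performance drop relative to the unpermuted data.
    return baseline - permuted
```

Applied to each concept in a set, the returned drops yield the feature-importance ranking the abstract describes; a concept the model ignores produces a drop near zero.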

