Niu Yingjie, Ding Ming, Ge Maoning, Karlsson Robin, Zhang Yuxiao, Carballo Alexander, Takeda Kazuya
Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
Graduate School of Engineering, Gifu University, Gifu 501-1112, Japan.
Sensors (Basel). 2024 Apr 24;24(9):2695. doi: 10.3390/s24092695.
Transformer-based models have gained popularity in natural language processing (NLP) and are now widely used in computer vision tasks and in multi-modal models such as GPT-4. This paper presents a novel method for enhancing the explainability of transformer-based image classification models. By providing visualizations of class-specific maps, our method aims to improve trust in classification results and to help users gain a deeper understanding of the model for downstream tasks. We introduce two modules: "Relationship Weighted Out" and "Cut". The "Relationship Weighted Out" module extracts class-specific information from intermediate layers, enabling us to highlight the relevant features. The "Cut" module then performs fine-grained feature decomposition, taking into account factors such as position, texture, and color. Integrating these modules yields dense, class-specific visual explainability maps. We validate our method with extensive qualitative and quantitative experiments on the ImageNet dataset. We also conduct extensive experiments on the LRN dataset, which is designed specifically for autonomous-driving danger alerts, to evaluate the explainability of our method in scenarios with complex backgrounds. The results demonstrate a significant improvement over previous methods. Finally, ablation experiments confirm the individual contribution of each module, solidifying the overall effectiveness of the proposed approach.
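To make the idea of extracting class-specific explanations from intermediate transformer layers concrete, the following is a minimal sketch of one common baseline approach (gradient-weighted attention relevance propagated across layers, in the spirit of prior transformer-explainability work) — it is not the paper's actual "Relationship Weighted Out" or "Cut" implementation, and the function name, array shapes, and random toy inputs are illustrative assumptions:

```python
import numpy as np

def class_specific_map(attns, grads):
    """Sketch of class-specific relevance from intermediate layers:
    weight each layer's attention by the gradient of the target class
    score w.r.t. that attention, clip negative contributions, average
    over heads, and propagate the relevance through the layers.

    attns, grads: lists of (heads, tokens, tokens) arrays, one per layer,
    where token 0 is the CLS token. (Hypothetical interface.)"""
    tokens = attns[0].shape[-1]
    R = np.eye(tokens)  # identity: each token initially explains itself
    for A, G in zip(attns, grads):
        # class-conditioned attention: positive gradient-weighted entries only
        W = np.maximum(A * G, 0).mean(axis=0)  # average over heads
        # residual-style relevance update, layer by layer
        R = (np.eye(tokens) + W) @ R
    # CLS-token row over the patch tokens gives a spatial relevance map
    return R[0, 1:]

# Toy example: 2 layers, 3 heads, 1 CLS token + 4 patch tokens.
rng = np.random.default_rng(0)
attns = [rng.random((3, 5, 5)) for _ in range(2)]
grads = [rng.standard_normal((3, 5, 5)) for _ in range(2)]
heat = class_specific_map(attns, grads)
print(heat.shape)  # one relevance value per patch token
```

Because the gradients are taken with respect to a chosen class logit, running the same routine with a different target class yields a different map — which is the property that distinguishes class-specific explanations from class-agnostic attention rollout.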