Yang Lei, Lu Lingmeng, Liu Chao, Zhang Jian, Guo Kehua, Zhang Ning, Zhou Fangfang, Zhao Ying
School of Computer Science and Engineering, Central South University, Changsha, 410083, China.
Institute of Systems Engineering Academy of Military Sciences, People's Liberation Army, Beijing, 100091, China.
Sci Rep. 2025 Mar 18;15(1):9261. doi: 10.1038/s41598-025-94052-8.
Convolutional neural networks (CNNs) have been widely used in image classification tasks. Neuron feature visualization techniques can generate intuitive images to depict the features extracted by neurons, helping users interpret the working mechanism of a CNN. However, a CNN model commonly has numerous neurons, and manually reviewing all neurons' feature visualizations is exhausting, making CNN interpretability exploration inefficient. Inspired by the SHapley Additive exPlanation (SHAP) method from coalitional game theory, a quantified metric called the Neuron Interpretive Metric (NeuronIM) is proposed to assess the feature expression ability of a neuron's feature visualization by calculating the similarity between the feature visualization and the SHAP image of the neuron. With NeuronIM, users can rapidly identify important neurons during CNN interpretability exploration. Building on NeuronIM, a metric called the Layer Interpretive Metric (LayerIM) and two interactive interfaces are further proposed. LayerIM assesses the interpretability of a convolutional layer by averaging the NeuronIM values of all neurons in that layer. The interactive interfaces display diverse explanatory information in multiple views and provide users with rich interactions to efficiently accomplish interpretability exploration tasks. A model pruning experiment and use cases were conducted to demonstrate the effectiveness of the proposed metrics and interfaces.
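The two metrics described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the abstract does not specify the similarity measure used by NeuronIM, so cosine similarity over flattened pixel values is an assumption here; LayerIM's per-layer averaging is stated directly in the abstract.

```python
import numpy as np

def neuron_im(feat_vis: np.ndarray, shap_img: np.ndarray) -> float:
    """NeuronIM sketch: similarity between a neuron's feature
    visualization and its SHAP image. Cosine similarity over
    flattened pixels is an assumption; the paper's exact
    similarity measure may differ."""
    a = feat_vis.astype(np.float64).ravel()
    b = shap_img.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def layer_im(neuron_ims: list[float]) -> float:
    """LayerIM: the mean NeuronIM over all neurons in a layer,
    as described in the abstract."""
    return float(np.mean(neuron_ims))
```

Neurons could then be ranked by their NeuronIM values to surface the most interpretable ones first, and layers compared by LayerIM, which matches the exploration workflow the interfaces are built around.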