Interactive exploration of CNN interpretability via coalitional game theory.

Authors

Yang Lei, Lu Lingmeng, Liu Chao, Zhang Jian, Guo Kehua, Zhang Ning, Zhou Fangfang, Zhao Ying

Affiliations

School of Computer Science and Engineering, Central South University, Changsha, 410083, China.

Institute of Systems Engineering Academy of Military Sciences, People's Liberation Army, Beijing, 100091, China.

Publication

Sci Rep. 2025 Mar 18;15(1):9261. doi: 10.1038/s41598-025-94052-8.

DOI: 10.1038/s41598-025-94052-8
PMID: 40102523
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11920214/
Abstract

Convolutional neural network (CNN) has been widely used in image classification tasks. Neuron feature visualization techniques can generate intuitive images to depict features extracted by neurons, helping users to interpret the working mechanism of a CNN. However, a CNN model commonly has numerous neurons. Manually reviewing all neurons' feature visualizations is exhaustive, thereby causing low efficiency in CNN interpretability exploration. Inspired by SHapley Additive exPlanation (SHAP) method in Coalitional Game Theory, a quantified metric called Neuron Interpretive Metric (NeuronIM) is proposed to assess the feature expression ability of a neuron feature visualization by calculating the similarity between the feature visualization and SHAP image of the neuron. Thus, users can rapidly identify important neurons in CNN interpretability exploration. A metric called layer interpretive metric (LayerIM) and two interactive interfaces are proposed based on NeuronIM and LayerIM. The LayerIM can assess the interpretability of a convolution layer by averaging the NeuronIM values of all neurons in the layer. The interactive interfaces can display diverse explanatory information in multiple views and provide users with rich interactions to efficiently accomplish interpretability exploration tasks. A model pruning experiment and use cases were conducted to demonstrate the effectiveness of the proposed metrics and interfaces.
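
The abstract defines NeuronIM as a similarity score between a neuron's feature-visualization image and that neuron's SHAP image, and LayerIM as the average NeuronIM over all neurons in a convolution layer. The following is a minimal sketch of that computation under stated assumptions: the inputs are same-shaped 2D arrays, the names neuron_im and layer_im are hypothetical, and cosine similarity stands in for the similarity measure, which the abstract does not specify.

import numpy as np

def neuron_im(feature_vis: np.ndarray, shap_image: np.ndarray) -> float:
    # NeuronIM sketch: similarity between a neuron's feature visualization
    # and its SHAP image. Cosine similarity is a placeholder; the paper may
    # use a different measure.
    a = feature_vis.astype(np.float64).ravel()
    b = shap_image.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def layer_im(feature_vis_maps, shap_maps) -> float:
    # LayerIM: average NeuronIM over all neurons (channels) in one layer,
    # as described in the abstract.
    scores = [neuron_im(fv, sv) for fv, sv in zip(feature_vis_maps, shap_maps)]
    return float(np.mean(scores)) if scores else 0.0

# Toy usage with random stand-in images for an 8-neuron layer.
rng = np.random.default_rng(0)
vis_maps = [rng.random((64, 64)) for _ in range(8)]
shap_maps = [rng.random((64, 64)) for _ in range(8)]
print(layer_im(vis_maps, shap_maps))

Ranking neurons by their NeuronIM scores then mirrors, in spirit, how the abstract says the metric is used to quickly identify important neurons during interpretability exploration.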

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/019629e6a9c9/41598_2025_94052_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/16d06f47fc13/41598_2025_94052_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/f79a48e96025/41598_2025_94052_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/88190da6edfb/41598_2025_94052_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/2f898c63f101/41598_2025_94052_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/e81dd06a216c/41598_2025_94052_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/d7edc2a3bae9/41598_2025_94052_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/184a6e3f75bc/41598_2025_94052_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/a1ad2e2d8694/41598_2025_94052_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/f447c71412b0/41598_2025_94052_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1709/11920214/449fe4409f76/41598_2025_94052_Fig10_HTML.jpg

Similar Articles

1. Interactive exploration of CNN interpretability via coalitional game theory.
   Sci Rep. 2025 Mar 18;15(1):9261. doi: 10.1038/s41598-025-94052-8.
2. Learning feature relationships in CNN model via relational embedding convolution layer.
   Neural Netw. 2024 Nov;179:106510. doi: 10.1016/j.neunet.2024.106510. Epub 2024 Jul 5.
3. An interpretable decision-support model for breast cancer diagnosis using histopathology images.
   J Pathol Inform. 2023 Jun 13;14:100319. doi: 10.1016/j.jpi.2023.100319. eCollection 2023.
4. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.
   Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
5. CEFEs: A CNN Explainable Framework for ECG Signals.
   Artif Intell Med. 2021 May;115:102059. doi: 10.1016/j.artmed.2021.102059. Epub 2021 Mar 26.
6. Towards Explainable Detection of Alzheimer's Disease: A Fusion of Deep Convolutional Neural Network and Enhanced Weighted Fuzzy C-Mean.
   Curr Med Imaging. 2024;20:e15734056317205. doi: 10.2174/0115734056317205241014060633.
7. FP-CNN: Fuzzy pooling-based convolutional neural network for lung ultrasound image classification with explainable AI.
   Comput Biol Med. 2023 Oct;165:107407. doi: 10.1016/j.compbiomed.2023.107407. Epub 2023 Sep 1.
8. Explainable deep learning model for automatic mulberry leaf disease classification.
   Front Plant Sci. 2023 Sep 19;14:1175515. doi: 10.3389/fpls.2023.1175515. eCollection 2023.
9. Stable feature selection utilizing Graph Convolutional Neural Network and Layer-wise Relevance Propagation for biomarker discovery in breast cancer.
   Artif Intell Med. 2024 May;151:102840. doi: 10.1016/j.artmed.2024.102840. Epub 2024 Mar 11.
10. Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities.
    Diagnostics (Basel). 2019 Apr 3;9(2):38. doi: 10.3390/diagnostics9020038.
