

Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?

Affiliations

BIC-ESAT, ERE, and SKLTCS, College of Engineering, Peking University, Beijing, 100871, P. R. China.

Eastern Institute for Advanced Study, Yongriver Institute of Technology, Ningbo, Zhejiang, 315200, P. R. China.

Publication Information

Adv Sci (Weinh). 2022 Dec;9(35):e2204723. doi: 10.1002/advs.202204723. Epub 2022 Oct 10.

DOI:10.1002/advs.202204723
PMID:36216585
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9762288/
Abstract

The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been created to interpret the "black box" model. Fundamental limitations remain, however, that impede the pace of understanding the networks, especially the extraction of understandable semantic space. In this work, the framework of semantic explainable artificial intelligence (S-XAI) is introduced, which utilizes a sample compression method based on the distinctive row-centered principal component analysis (PCA) that is different from the conventional column-centered PCA to obtain common traits of samples from the convolutional neural network (CNN), and extracts understandable semantic spaces on the basis of discovered semantically sensitive neurons and visualization techniques. Statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed. The experimental results demonstrate that S-XAI is effective in providing a semantic interpretation for the CNN, and offers broad usage, including trustworthiness assessment and semantic sample searching.
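The abstract contrasts the paper's row-centered PCA with conventional column-centered PCA. As a minimal sketch of that distinction only (not the authors' implementation; function names and the toy data are illustrative), the two variants differ solely in which axis is mean-subtracted before extracting principal components:

```python
import numpy as np

def column_centered_pca(X, k):
    """Conventional PCA: subtract each feature's mean (center the columns)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # project samples onto the top-k principal axes

def row_centered_pca(X, k):
    """Row-centered variant: subtract each sample's own mean (center the rows),
    so the decomposition emphasizes traits shared across samples."""
    Xr = X - X.mean(axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(Xr, full_matrices=False)
    return Xr @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))   # toy data: 20 samples, 8 features
Zc = column_centered_pca(X, 2)
Zr = row_centered_pca(X, 2)
print(Zc.shape, Zr.shape)      # (20, 2) (20, 2)
```

In S-XAI the inputs would be CNN feature activations rather than random data; the sketch only shows how the centering axis changes which structure the leading components capture.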


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/56e0cd7cbe60/ADVS-9-2204723-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/1b1518ebd4e0/ADVS-9-2204723-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/e08adea59cb6/ADVS-9-2204723-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/a5c3698a1a09/ADVS-9-2204723-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/15c439a7194c/ADVS-9-2204723-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/6ad8886a84b7/ADVS-9-2204723-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8b88/9762288/47926f57291e/ADVS-9-2204723-g007.jpg

Similar Articles

1
Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?
Adv Sci (Weinh). 2022 Dec;9(35):e2204723. doi: 10.1002/advs.202204723. Epub 2022 Oct 10.
2
Interpretable Artificial Intelligence through Locality Guided Neural Networks.
Neural Netw. 2022 Nov;155:58-73. doi: 10.1016/j.neunet.2022.08.009. Epub 2022 Aug 15.
3
Filter pruning for convolutional neural networks in semantic image segmentation.
Neural Netw. 2024 Jan;169:713-732. doi: 10.1016/j.neunet.2023.11.010. Epub 2023 Nov 7.
4
Voice pathology detection using optimized convolutional neural networks and explainable artificial intelligence-based analysis.
Comput Methods Biomech Biomed Engin. 2024 Nov;27(14):2041-2057. doi: 10.1080/10255842.2023.2270102. Epub 2023 Oct 18.
5
A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
J Neurosci Methods. 2024 Aug;408:110159. doi: 10.1016/j.jneumeth.2024.110159. Epub 2024 May 7.
6
Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks.
Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. Epub 2023 Feb 18.
7
Semantic segmentation of human oocyte images using deep neural networks.
Biomed Eng Online. 2021 Apr 23;20(1):40. doi: 10.1186/s12938-021-00864-w.
8
Interpretable neural networks: principles and applications.
Front Artif Intell. 2023 Oct 13;6:974295. doi: 10.3389/frai.2023.974295. eCollection 2023.
9
A Lightweight Semantic Segmentation Algorithm Based on Deep Convolutional Neural Networks.
Comput Intell Neurosci. 2022 Sep 6;2022:5339664. doi: 10.1155/2022/5339664. eCollection 2022.
10
Application of Dual-Channel Convolutional Neural Network Algorithm in Semantic Feature Analysis of English Text Big Data.
Comput Intell Neurosci. 2021 Nov 6;2021:7085412. doi: 10.1155/2021/7085412. eCollection 2021.

Cited By

1
Electrophilicity Modulation for Sub-ppm Visualization and Discrimination of EDA.
Adv Sci (Weinh). 2024 May;11(18):e2400361. doi: 10.1002/advs.202400361. Epub 2024 Mar 6.
2
Articular cartilage repair biomaterials: strategies and applications.
Mater Today Bio. 2024 Jan 6;24:100948. doi: 10.1016/j.mtbio.2024.100948. eCollection 2024 Feb.

References

1
Explainable neural networks that simulate reasoning.
Nat Comput Sci. 2021 Sep;1(9):607-618. doi: 10.1038/s43588-021-00132-w. Epub 2021 Sep 22.
2
A review on genetic algorithm: past, present, and future.
Multimed Tools Appl. 2021;80(5):8091-8126. doi: 10.1007/s11042-020-10139-6. Epub 2020 Oct 31.
3
Extraction of an Explanatory Graph to Interpret a CNN.
IEEE Trans Pattern Anal Mach Intell. 2021 Nov;43(11):3863-3877. doi: 10.1109/TPAMI.2020.2992207. Epub 2021 Oct 1.
4
Interpretable CNNs for Object Classification.
IEEE Trans Pattern Anal Mach Intell. 2021 Oct;43(10):3416-3431. doi: 10.1109/TPAMI.2020.2982882. Epub 2021 Sep 2.
5
LitVar: a semantic search engine for linking genomic variant data in PubMed and PMC.
Nucleic Acids Res. 2018 Jul 2;46(W1):W530-W536. doi: 10.1093/nar/gky355.
6
Principal component analysis: a review and recent developments.
Philos Trans A Math Phys Eng Sci. 2016 Apr 13;374(2065):20150202. doi: 10.1098/rsta.2015.0202.
7
SLIC superpixels compared to state-of-the-art superpixel methods.
IEEE Trans Pattern Anal Mach Intell. 2012 Nov;34(11):2274-82. doi: 10.1109/TPAMI.2012.120.