

Interpretable Artificial Intelligence through Locality Guided Neural Networks.

Affiliations

Ryerson University, 350 Victoria St, M5B 2K3, Toronto, Ontario, Canada.

Publication information

Neural Netw. 2022 Nov;155:58-73. doi: 10.1016/j.neunet.2022.08.009. Epub 2022 Aug 15.

DOI: 10.1016/j.neunet.2022.08.009
PMID: 36041281
Abstract

In current deep learning architectures, each of the deeper layers tends to contain hundreds of unorganized neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons using correlation as the criterion, humans can observe how clusters of neighbouring neurons interact with each other. Research in Explainable Artificial Intelligence (XAI) aims to alleviate the black-box nature of current AI methods and make them understandable by humans. In this paper, we extend our previous algorithm for XAI in deep learning, called Locality Guided Neural Network (LGNN). LGNN preserves locality between neighbouring neurons within each layer of a deep network during training. Motivated by Self-Organizing Maps (SOMs), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. Our algorithm can easily be plugged into current state-of-the-art Convolutional Neural Network (CNN) models to make neighbouring neurons more correlated. A cluster of neighbouring neurons activating for a class makes the network both quantitatively and qualitatively more interpretable when visualized, as we show through our experiments. This paper focuses on image processing with CNNs, but the method can theoretically be applied to any type of deep learning architecture. In our experiments, we train VGG and WRN networks for image classification on CIFAR100 and Imagenette, and analyse the different perceptible clusters of activations that emerge in response to different input classes.
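The abstract describes the core mechanism — a training-time term that pushes neighbouring neurons in a layer toward correlated activations — without giving the loss itself. The sketch below illustrates that idea only; the function name, the simple 1-D channel neighbourhood, and the exact penalty form are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def locality_loss(activations, neighbor_radius=1):
    """Illustrative locality penalty: encourage neighbouring channels
    to have highly correlated activations across a batch.

    activations: (batch, channels) array, e.g. per-channel mean
    activations of one layer for a batch of inputs.
    Returns 0 when every channel is perfectly correlated with its
    neighbours, and grows as neighbour correlation drops.
    """
    # Channel-by-channel correlation matrix (columns are variables).
    corr = np.corrcoef(activations, rowvar=False)
    n_channels = corr.shape[0]
    loss, count = 0.0, 0
    for i in range(n_channels):
        for d in range(1, neighbor_radius + 1):
            j = i + d
            if j < n_channels:
                # Penalty per neighbour pair: 1 - correlation,
                # so corr -> 1 drives the penalty toward 0.
                loss += 1.0 - corr[i, j]
                count += 1
    return loss / count
```

In training, a term like this (computed per layer on batch activations, and weighted) would be added to the task loss, so that gradient descent jointly minimizes classification error and neighbour decorrelation.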


Similar articles

1. Interpretable Artificial Intelligence through Locality Guided Neural Networks.
Neural Netw. 2022 Nov;155:58-73. doi: 10.1016/j.neunet.2022.08.009. Epub 2022 Aug 15.
2. Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?
Adv Sci (Weinh). 2022 Dec;9(35):e2204723. doi: 10.1002/advs.202204723. Epub 2022 Oct 10.
3. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks.
Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. Epub 2023 Feb 18.
4. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
5. Toward explainable AI-empowered cognitive health assessment.
Front Public Health. 2023 Mar 9;11:1024195. doi: 10.3389/fpubh.2023.1024195. eCollection 2023.
6. Explainability of deep neural networks for MRI analysis of brain tumors.
Int J Comput Assist Radiol Surg. 2022 Sep;17(9):1673-1683. doi: 10.1007/s11548-022-02619-x. Epub 2022 Apr 23.
7. Voice pathology detection using optimized convolutional neural networks and explainable artificial intelligence-based analysis.
Comput Methods Biomech Biomed Engin. 2024 Nov;27(14):2041-2057. doi: 10.1080/10255842.2023.2270102. Epub 2023 Oct 18.
8. CAManim: Animating end-to-end network activation maps.
PLoS One. 2024 Jun 18;19(6):e0296985. doi: 10.1371/journal.pone.0296985. eCollection 2024.
9. The deep arbitrary polynomial chaos neural network or how Deep Artificial Neural Networks could benefit from data-driven homogeneous chaos theory.
Neural Netw. 2023 Sep;166:85-104. doi: 10.1016/j.neunet.2023.06.036. Epub 2023 Jul 10.
10. Human attention guided explainable artificial intelligence for computer vision models.
Neural Netw. 2024 Sep;177:106392. doi: 10.1016/j.neunet.2024.106392. Epub 2024 May 15.

Cited by

1. A general framework for interpretable neural learning based on local information-theoretic goal functions.
Proc Natl Acad Sci U S A. 2025 Mar 11;122(10):e2408125122. doi: 10.1073/pnas.2408125122. Epub 2025 Mar 5.