Interpretable Artificial Intelligence through Locality Guided Neural Networks.

Affiliations

Ryerson University, 350 Victoria St, M5B 2K3, Toronto, Ontario, Canada.

Publication information

Neural Netw. 2022 Nov;155:58-73. doi: 10.1016/j.neunet.2022.08.009. Epub 2022 Aug 15.

Abstract

In current deep learning architectures, each of the deeper layers in a network tends to contain hundreds of unorganized neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons using correlation as the criterion, humans can observe how clusters of neighbouring neurons interact with each other. Research in Explainable Artificial Intelligence (XAI) aims to alleviate the black-box nature of current AI methods and make them understandable by humans. In this paper, we extend our previous algorithm for XAI in deep learning, called Locality Guided Neural Network (LGNN). LGNN preserves locality between neighbouring neurons within each layer of a deep network during training. Motivated by Self-Organizing Maps (SOMs), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. Our algorithm can easily be plugged into current state-of-the-art Convolutional Neural Network (CNN) models to make neighbouring neurons more correlated. A cluster of neighbouring neurons activating for a class makes the network both quantitatively and qualitatively more interpretable when visualized, as we show through our experiments. This paper focuses on image processing with CNNs, but the method can theoretically be applied to any type of deep learning architecture. In our experiments, we train VGG and WRN networks for image classification on CIFAR100 and Imagenette. Our experiments analyse the different perceptible clusters of activations that form in response to different input classes.
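The abstract describes the core idea (an SOM-inspired constraint that makes neighbouring neurons within a layer correlated) but not the exact training rule. The PyTorch sketch below is only an illustrative guess at how such a locality term could be bolted onto a standard CNN loss: it penalizes the distance between the weight vectors of adjacent output channels in each convolutional and linear layer. The names `locality_penalty` and `training_step`, the 1-D channel neighbourhood, and the coefficient `lam` are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def locality_penalty(weight: torch.Tensor) -> torch.Tensor:
    # Flatten each output neuron's incoming weights to one vector
    # (Conv2d: (out_ch, in_ch, kH, kW) -> (out_ch, in_ch*kH*kW)),
    # then penalize the squared distance between adjacent neurons
    # so neighbours along the channel axis stay similar.
    w = weight.flatten(1)
    return (w[1:] - w[:-1]).pow(2).sum(dim=1).mean()

def training_step(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                  lam: float = 1e-3) -> torch.Tensor:
    # Standard classification loss plus a locality term per layer;
    # lam trades off task accuracy against neighbourhood smoothness.
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            loss = loss + lam * locality_penalty(m.weight)
    return loss
```

Because the penalty only adds a differentiable term to the loss, it plugs into an off-the-shelf VGG or WRN training loop without architectural changes, which matches the abstract's claim that the method can be dropped into existing CNN models.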
