Words as a window: Using word embeddings to explore the learned representations of Convolutional Neural Networks.

Affiliations

University of Victoria, Department of Computer Science, 3800 Finnerty Road, Victoria, British Columbia, Canada.

University of Alberta, Department of Computing Science & Department of Psychology, 116 St. and 85 Ave., Edmonton, Alberta, Canada.

Publication information

Neural Netw. 2021 May;137:63-74. doi: 10.1016/j.neunet.2020.12.009. Epub 2021 Jan 22.

Abstract

As deep neural net architectures minimize loss, they accumulate information in a hierarchy of learned representations that ultimately serve the network's final goal. Different architectures tackle this problem in slightly different ways, but all create intermediate representational spaces built to inform their final prediction. Here we show that very different neural networks trained on two very different tasks build knowledge representations that display similar underlying patterns. Namely, we show that the representational spaces of several distributional semantic models bear a remarkable resemblance to several Convolutional Neural Network (CNN) architectures (trained for image classification). We use this information to explore the network behavior of CNNs (1) in pretrained models, (2) during training, and (3) during adversarial attacks. We use these findings to motivate several applications aimed at improving future research on CNNs. Our work illustrates the power of using one model to explore another, gives new insights into the function of CNN models, and provides a framework for others to perform similar analyses when developing new architectures. We show that one neural network model can provide a window into understanding another.
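
The abstract does not spell out the comparison procedure, so the following is only a minimal sketch of the kind of representational-similarity check it describes: correlating the pairwise similarity structure of word embeddings for class labels with the pairwise similarity structure of CNN features for the same classes. The names label_embeddings, cnn_class_features, and similarity_alignment are hypothetical placeholders for illustration, not identifiers from the paper.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def similarity_alignment(label_embeddings, cnn_class_features):
    # Cosine-distance structure of each space, compared rank-wise: a
    # representational-similarity-style measure of how similarly the two
    # spaces arrange the same set of classes.
    word_dists = pdist(label_embeddings, metric="cosine")
    cnn_dists = pdist(cnn_class_features, metric="cosine")
    rho, _ = spearmanr(word_dists, cnn_dists)
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.normal(size=(10, 300))  # toy stand-in for word vectors of 10 class names
    feats = rng.normal(size=(10, 512))   # toy stand-in for class-mean CNN penultimate features
    print(f"alignment (Spearman rho): {similarity_alignment(labels, feats):.3f}")

With real inputs (e.g. word2vec vectors for ImageNet class names and class-averaged penultimate-layer activations), a high correlation would indicate the kind of cross-model resemblance the abstract reports; with the random toy arrays above it should be near zero.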
