Fowler School of Engineering, Chapman University, United States of America; Department of Computer Science, Bren School of Information and Computer Sciences, University of California, Irvine, United States of America.
Fowler School of Engineering, Chapman University, United States of America.
Neural Netw. 2020 Jun;126:235-249. doi: 10.1016/j.neunet.2020.03.016. Epub 2020 Mar 25.
Weight-sharing is one of the pillars behind Convolutional Neural Networks and their successes. However, in physical neural systems such as the brain, weight-sharing is implausible. This discrepancy raises the fundamental question of whether weight-sharing is necessary. If so, to what degree of precision? If not, what are the alternatives? The goal of this study is to investigate these questions, primarily through simulations in which the weight-sharing assumption is relaxed. Taking inspiration from neural circuitry, we explore the use of Free Convolutional Networks and neurons with variable connection patterns. Using Free Convolutional Networks, we show that while weight-sharing is a pragmatic optimization approach, it is not a necessity in computer vision applications. Furthermore, Free Convolutional Networks match the performance of standard architectures when trained on properly translated data (akin to video). Under the assumption of translationally augmented data, Free Convolutional Networks learn translationally invariant representations that yield an approximate form of weight-sharing.
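Relaxing weight-sharing, as the abstract describes, amounts to giving each output location its own filter rather than one filter shared across all locations (a locally connected layer). The following NumPy sketch is our own illustration, not code from the paper; the names `conv_shared` and `conv_free` are hypothetical. It contrasts the parameter counts and shows that tying the free weights recovers the shared case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-channel 6x6 input; 3x3 filters; "valid" sliding gives a 4x4 output.
H, W, K = 6, 6, 3
OH, OW = H - K + 1, W - K + 1
x = rng.standard_normal((H, W))

# Standard convolution: ONE shared 3x3 filter reused at every output location.
w_shared = rng.standard_normal((K, K))

# "Free" (locally connected) layer: a SEPARATE 3x3 filter per output location,
# so the weight tensor is (OH, OW, K, K) instead of (K, K).
w_free = rng.standard_normal((OH, OW, K, K))

def conv_shared(x, w):
    """Valid cross-correlation with a single shared filter."""
    out = np.empty((OH, OW))
    for i in range(OH):
        for j in range(OW):
            out[i, j] = np.sum(x[i:i + K, j:j + K] * w)
    return out

def conv_free(x, w):
    """Same sliding-window structure, but w[i, j] is location-specific."""
    out = np.empty((OH, OW))
    for i in range(OH):
        for j in range(OW):
            out[i, j] = np.sum(x[i:i + K, j:j + K] * w[i, j])
    return out

# Parameter counts: K*K = 9 shared weights vs OH*OW*K*K = 144 free weights.
print(w_shared.size, w_free.size)
```

If every per-location filter converges to the same values, the free layer computes exactly what the shared layer does, which is one way to read the abstract's claim that translation-augmented training yields an approximate form of weight-sharing.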