

Contextual Integration in Cortical and Convolutional Neural Networks.

Authors

Iyer Ramakrishnan, Hu Brian, Mihalas Stefan

Affiliations

Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States.

Publication Information

Front Comput Neurosci. 2020 Apr 23;14:31. doi: 10.3389/fncom.2020.00031. eCollection 2020.

Abstract

It has been suggested that neurons can represent sensory input using probability distributions and that neural circuits can perform probabilistic inference. Lateral connections between neurons have been shown to have non-random connectivity and to modulate responses to stimuli within the classical receptive field. Large-scale efforts mapping local cortical connectivity describe cell-type-specific connections from inhibitory neurons and like-to-like connectivity between excitatory neurons. To relate the observed connectivity to computations, we propose a neuronal network model that approximates Bayesian inference of the probability of different features being present at different image locations. We show that the lateral connections between excitatory neurons in a circuit implementing contextual integration in this model should depend on correlations between unit activities, minus a global inhibitory drive. The model naturally suggests the need for two types of inhibitory gates (normalization and surround inhibition). First, using natural scene statistics and classical receptive fields corresponding to simple cells parameterized with data from mouse primary visual cortex, we show that the predicted connectivity qualitatively matches that measured in mouse cortex: neurons with similar orientation tuning have stronger connectivity, and both excitatory and inhibitory connectivity have a modest spatial extent, comparable to that observed in mouse visual cortex. We then incorporate lateral connections learned using this model into convolutional neural networks. Features are defined by supervised learning on the task, and the lateral connections provide an unsupervised learning of feature context in multiple layers. Since the lateral connections provide contextual information when the feedforward input is locally corrupted, we show that incorporating such lateral connections into convolutional neural networks makes them more robust to noise and leads to better performance on noisy versions of the MNIST dataset. Decomposing the predicted lateral connectivity matrices into low-rank and sparse components introduces additional cell types into these networks. We explore the effects of cell-type-specific perturbations on network computation. Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding the computations implemented by feedforward connections when the input is unreliable, and it demonstrates the potential usefulness of combining supervised and unsupervised learning techniques in real-world vision tasks.
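
The abstract describes lateral weights derived from correlations between unit activities minus a global inhibitory drive, which then supply contextual input when the feedforward signal is locally corrupted. A minimal sketch of that idea, assuming a generic layer of units, is given below; the function names, the use of the mean correlation as the global inhibitory term, and the mixing parameter alpha are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lateral_weights(activities, inhibition=None):
    """Estimate lateral weights from unit activities.

    activities: array of shape (n_samples, n_units) of feedforward responses.
    Returns an (n_units, n_units) matrix: activity correlations minus a global
    inhibitory offset (here the mean correlation, a hypothetical stand-in)."""
    corr = np.corrcoef(activities, rowvar=False)  # like-to-like: correlated units couple
    np.fill_diagonal(corr, 0.0)                   # no self-connections
    if inhibition is None:
        inhibition = corr.mean()                  # assumed choice of global inhibitory drive
    return corr - inhibition                      # net excitatory where correlation exceeds it

def contextual_response(feedforward, W_lat, alpha=0.5):
    """One step of contextual integration: blend the lateral input from the
    rest of the layer with the (possibly corrupted) feedforward drive."""
    lateral = feedforward @ W_lat.T
    return (1 - alpha) * feedforward + alpha * lateral

# Toy usage: correlated unit activities, then a response with one unit zeroed out.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 3))
acts = latent @ rng.normal(size=(3, 8))           # 8 units sharing latent structure
W = lateral_weights(acts)
noisy = acts[0].copy()
noisy[2] = 0.0                                    # locally corrupted feedforward input
restored = contextual_response(noisy, W)          # lateral term partly fills in the gap
```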

