Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588, U.S.A.
Neural Comput. 2013 Nov;25(11):2858-903. doi: 10.1162/NECO_a_00504. Epub 2013 Jul 29.
Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as "permitted sets" of the network. We introduce a simple encoding rule that selectively turns "on" synapses between neurons that coappear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states ("on" or "off"), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the encoding rule, including unintended "spurious" states, and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced, i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are natural in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
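The encoding rule described above can be sketched in a few lines: a synapse (i, j) is switched "on" (taking its weight from the strength matrix S) exactly when neurons i and j co-appear in at least one stored pattern, and is "off" (zero) otherwise. The function name `encode_patterns` and the list-of-sets pattern representation below are illustrative choices, not notation from the paper.

```python
from itertools import combinations

def encode_patterns(patterns, S, n):
    """Binary encoding rule (sketch): W[i][j] = S[i][j] if neurons i and j
    co-appear in at least one pattern, and 0 ("off") otherwise.

    patterns : iterable of sets of neuron indices (the binary patterns)
    S        : n x n synaptic strength matrix (underlying heterogeneous weights)
    n        : number of neurons
    """
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        # Turn "on" every synapse between a pair of co-active neurons.
        for i, j in combinations(sorted(p), 2):
            W[i][j] = S[i][j]
            W[j][i] = S[j][i]
    return W
```

For example, storing the patterns {0, 1} and {1, 2} in a three-neuron network turns on the synapses between neurons 0 and 1 and between neurons 1 and 2, while the synapse between 0 and 2 stays off, since those neurons never co-appear.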
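The Cayley-Menger determinant mentioned in the abstract is a classical tool from distance geometry: it tests whether a set of pairwise distances can be realized by points in Euclidean space, and for k+1 points in R^k it encodes the simplex volume. The snippet below is a minimal pure-Python illustration (not code from the paper); for a triangle with squared side lengths d², the determinant D satisfies area² = -D/16.

```python
def det(M):
    """Determinant by cofactor expansion (fine for the small matrices here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cayley_menger(d2):
    """Cayley-Menger determinant for k+1 points, given the (k+1) x (k+1)
    matrix d2 of squared pairwise distances. The bordered matrix has a zero
    in the corner, ones along the first row and column, and d2 inside."""
    m = len(d2)
    B = [[0] + [1] * m] + [[1] + d2[i] for i in range(m)]
    return det(B)
```

As a sanity check, a 3-4-5 right triangle (squared distances 9, 16, 25) gives D = -576, so area² = 576/16 = 36, recovering the expected area of 6.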