Liu Jian K, Buonomano Dean V
Department of Mathematics and Neurobiology, University of California, Los Angeles, Los Angeles, California 90095, USA.
J Neurosci. 2009 Oct 21;29(42):13172-81. doi: 10.1523/JNEUROSCI.2358-09.2009.
Complex neural dynamics produced by the recurrent architecture of neocortical circuits are critical to the cortex's computational power. However, the synaptic learning rules underlying the creation of stable propagation and reproducible neural trajectories within recurrent networks are not understood. Here, we examined synaptic learning rules with the goal of creating recurrent networks in which evoked activity would: (1) propagate throughout the entire network in response to a brief stimulus while avoiding runaway excitation; (2) exhibit spatially and temporally sparse dynamics; and (3) incorporate multiple neural trajectories, i.e., different input patterns should elicit distinct trajectories. We established that an unsupervised learning rule, termed presynaptic-dependent scaling (PSD), can achieve the proposed network dynamics. To quantify the structure of the trained networks, we developed a recurrence index, which revealed that presynaptic-dependent scaling generated a functionally feedforward network when training with a single stimulus. However, training the network with multiple input patterns revealed that: (1) multiple nonoverlapping stable trajectories can be embedded in the network; and (2) the structure of the network became progressively more complex (recurrent) as the number of training patterns increased. In addition, we determined that PSD and spike-timing-dependent plasticity operating in parallel improved the ability of the network to incorporate multiple and less variable trajectories, but also shortened the duration of the neural trajectory. Together, these results establish one of the first learning rules that can embed multiple trajectories, each of which recruits all neurons, within recurrent neural networks in a self-organizing manner.
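The abstract describes PSD as a homeostatic-style rule in which weight changes onto a neuron depend on presynaptic activity and on how far the postsynaptic neuron's activity is from a target level. The sketch below is a minimal, hypothetical rate-based variant of such a rule (the function name `psd_update`, the target value, and the learning rate are illustrative assumptions, not the paper's exact formulation): weights are scaled up when postsynaptic activity falls below target and down when it exceeds it, with the change gated by presynaptic activity so that synapses from silent presynaptic neurons are left unchanged.

```python
import numpy as np

def psd_update(W, pre_rate, post_avg, target=1.0, lr=0.01):
    """One presynaptic-dependent-scaling-style step (hypothetical rate-based sketch).

    W[i, j] is the weight from presynaptic neuron j to postsynaptic neuron i.
    pre_rate  : current presynaptic activity, shape (n_pre,)
    post_avg  : average postsynaptic activity, shape (n_post,)
    """
    # How far each postsynaptic neuron's average activity is from its target.
    error = target - post_avg            # shape (n_post,)
    # Change scales with presynaptic activity (outer product), so synapses
    # from inactive presynaptic neurons are not modified.
    dW = lr * np.outer(error, pre_rate)  # shape (n_post, n_pre)
    # Keep excitatory weights non-negative.
    return np.clip(W + dW, 0.0, None)
```

Under this sketch, an under-active postsynaptic neuron strengthens its inputs from active presynaptic partners, while an over-active one weakens them, which is one way a network could propagate activity across all neurons while avoiding runaway excitation.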