
Embedding multiple trajectories in simulated recurrent neural networks in a self-organizing manner.

Author Information

Liu Jian K, Buonomano Dean V

Affiliation

Department of Mathematics and Neurobiology, University of California, Los Angeles, Los Angeles, California 90095, USA.

Publication Information

J Neurosci. 2009 Oct 21;29(42):13172-81. doi: 10.1523/JNEUROSCI.2358-09.2009.

Abstract

Complex neural dynamics produced by the recurrent architecture of neocortical circuits is critical to the cortex's computational power. However, the synaptic learning rules underlying the creation of stable propagation and reproducible neural trajectories within recurrent networks are not understood. Here, we examined synaptic learning rules with the goal of creating recurrent networks in which evoked activity would: (1) propagate throughout the entire network in response to a brief stimulus while avoiding runaway excitation; (2) exhibit spatially and temporally sparse dynamics; and (3) incorporate multiple neural trajectories, i.e., different input patterns should elicit distinct trajectories. We established that an unsupervised learning rule, termed presynaptic-dependent scaling (PSD), can achieve the proposed network dynamics. To quantify the structure of the trained networks, we developed a recurrence index, which revealed that presynaptic-dependent scaling generated a functionally feedforward network when training with a single stimulus. However, training the network with multiple input patterns established that: (1) multiple non-overlapping stable trajectories can be embedded in the network; and (2) the structure of the network became progressively more complex (recurrent) as the number of training patterns increased. In addition, we determined that PSD and spike-timing-dependent plasticity operating in parallel improved the ability of the network to incorporate multiple and less variable trajectories, but also shortened the duration of the neural trajectory. Together, these results establish one of the first learning rules that can embed multiple trajectories, each of which recruits all neurons, within recurrent neural networks in a self-organizing manner.
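
The abstract names the presynaptic-dependent scaling (PSD) rule and the recurrence index only at a high level. The following is a minimal, illustrative sketch (not the authors' implementation) of how a PSD-style homeostatic update and a simple recurrence index might look in a small rate-based recurrent network. The function names (run_trial, psd_update, recurrence_index), the parameters (p_conn, A_goal, eta), and the exact form of the update are assumptions made for illustration and may differ from the rule defined in the paper.

# Minimal sketch, not the authors' code: a presynaptic-dependent scaling
# (PSD)-style homeostatic update in a small rate-based recurrent network,
# plus a toy "recurrence index" computed on the weight matrix.
# All parameter names and values are illustrative assumptions; the actual
# rule and index in Liu & Buonomano (2009) may differ.
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of units (assumption)
p_conn = 0.1     # connection probability (assumption)
A_goal = 0.1     # target average activity per unit (assumption)
eta = 0.01       # learning rate (assumption)

# Sparse random recurrent weight matrix; W[i, j] is the weight from unit j to unit i.
W = (rng.random((N, N)) < p_conn) * rng.uniform(0.0, 0.5, (N, N))
np.fill_diagonal(W, 0.0)

def run_trial(W, input_pattern, T=50):
    """Simulate T steps of a simple rate network; return the activity trace."""
    x = np.zeros(N)
    trace = np.zeros((T, N))
    for t in range(T):
        drive = W @ x + (input_pattern if t == 0 else 0.0)  # brief stimulus at t = 0
        x = np.tanh(np.clip(drive, 0.0, None))              # saturating, non-negative rate
        trace[t] = x
    return trace

def psd_update(W, trace, A_goal=A_goal, eta=eta):
    """PSD-style update (sketch): scale W[i, j] in proportion to the presynaptic
    unit's average activity and the mismatch between the postsynaptic unit's
    average activity and the target A_goal."""
    A_pre = trace.mean(axis=0)   # average activity of each unit as a presynaptic source
    A_post = trace.mean(axis=0)  # the same units are also the postsynaptic targets
    dW = eta * np.outer(A_goal - A_post, A_pre) * W   # rows = postsynaptic, cols = presynaptic
    return np.clip(W + dW, 0.0, None)                 # keep weights non-negative

def recurrence_index(W, order):
    """Toy recurrence index: with units sorted by activation order, the fraction of
    total weight carried by connections from later-firing to earlier-firing units.
    0 is purely feedforward; larger values indicate a more recurrent structure."""
    Ws = W[np.ix_(order, order)]
    backward = np.triu(Ws, k=1).sum()   # presynaptic unit fires after its postsynaptic target
    total = Ws.sum()
    return backward / total if total > 0 else 0.0

# Illustrative training loop with a single input pattern.
input_pattern = np.zeros(N)
input_pattern[:10] = 1.0                  # briefly stimulate a subset of units

for epoch in range(200):
    trace = run_trial(W, input_pattern)
    W = psd_update(W, trace)

order = np.argsort(trace.argmax(axis=0))  # sort units by time of peak activity
print("recurrence index:", recurrence_index(W, order))

Training with additional, non-overlapping input patterns would amount to running a trial for each pattern within an epoch before applying the update; according to the abstract, this is the regime in which the trained structure becomes progressively more recurrent, which is what the index above is meant to quantify.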

Cited By

Neural Sequences and the Encoding of Time.
Adv Exp Med Biol. 2024;1455:81-93. doi: 10.1007/978-3-031-60183-5_5.

The neural bases for timing of durations.
Nat Rev Neurosci. 2022 Nov;23(11):646-665. doi: 10.1038/s41583-022-00623-3. Epub 2022 Sep 12.

Dissecting cascade computational components in spiking neural networks.
PLoS Comput Biol. 2021 Nov 29;17(11):e1009640. doi: 10.1371/journal.pcbi.1009640. eCollection 2021 Nov.

References Cited in This Article

Memory without feedback in a neural network.
Neuron. 2009 Feb 26;61(4):621-34. doi: 10.1016/j.neuron.2008.12.012.

Memory traces in dynamical systems.
Proc Natl Acad Sci U S A. 2008 Dec 2;105(48):18970-5. doi: 10.1073/pnas.0804451105. Epub 2008 Nov 19.
