

Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation.

Affiliations

Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.

Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA.

Publication Information

Neural Comput. 2021 Sep 16;33(10):2603-2645. doi: 10.1162/neco_a_01418.

Abstract

Recurrent neural networks (RNNs) have been widely used to model the sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Incorporating biological constraints such as Dale's principle helps elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN was robust in triggering the sequence in response to various input signals, and it interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence could repeat periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with its growing or damped modes, together with the RNN's nonlinearity, was sufficient to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
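The central architectural constraint in the abstract, Dale's principle, can be made concrete with a small simulation. The following is a minimal sketch, not the authors' implementation: it builds a rate-based RNN whose units have sign-constrained outgoing weights (excitatory columns positive, inhibitory columns negative), drives it with a brief stimulus, and inspects the eigenspectrum of the connectivity matrix. Eigenvalues of the linearized dynamics with real part above 1 correspond to growing modes, which a bounded nonlinearity can confine into a limit cycle, as the paper argues. The network size, time constant, saturating rectified-tanh nonlinearity, and random (untrained) weights are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 80, 20            # assumed 4:1 excitatory-to-inhibitory ratio
N = N_E + N_I
tau, dt = 0.02, 0.001        # membrane time constant and Euler step (s), assumed

# Dale's principle: nonnegative weight magnitudes times one fixed sign per
# presynaptic unit. Column j of W carries the sign of unit j's cell type.
sign = np.diag([1.0] * N_E + [-1.0] * N_I)
W_mag = rng.gamma(shape=2.0, scale=0.05, size=(N, N))   # nonnegative magnitudes
W = W_mag @ sign
np.fill_diagonal(W, 0.0)     # no self-connections, a common modeling assumption

def phi(x):
    """Saturating, nonnegative rate nonlinearity (an illustrative choice)."""
    return np.tanh(np.maximum(0.0, x))

def step(r, u):
    """One Euler step of tau * dr/dt = -r + phi(W r + u)."""
    return r + (dt / tau) * (-r + phi(W @ r + u))

# Drive the network with a brief input pulse, then let it run spontaneously.
r = np.zeros(N)
trace = []
for t in range(2000):
    u = np.full(N, 0.5) if t < 100 else np.zeros(N)   # transient stimulus
    r = step(r, u)
    trace.append(r.copy())

# Eigenspectrum of the connectivity. Near the origin phi'(0) = 1, so modes of W
# with real part > 1 grow under the linearized dynamics; the saturating
# nonlinearity bounds them, which is the ingredient the paper links to limit
# cycle attractors.
eigvals = np.linalg.eigvals(W)
print(f"max Re(eigenvalue) of W: {eigvals.real.max():.3f}")
print(f"mean rate after stimulus offset: {np.mean(trace[-1]):.3f}")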


Similar Articles

A Midbrain Inspired Recurrent Neural Network Model for Robust Change Detection.
J Neurosci. 2022 Nov 2;42(44):8262-8283. doi: 10.1523/JNEUROSCI.0164-22.2022. Epub 2022 Sep 19.

Cited By

A neural basis for learning sequential memory in brain loop structures.
Front Comput Neurosci. 2024 Aug 5;18:1421458. doi: 10.3389/fncom.2024.1421458. eCollection 2024.

References

Gated Recurrent Units Viewed Through the Lens of Continuous Time Dynamical Systems.
Front Comput Neurosci. 2021 Jul 22;15:678158. doi: 10.3389/fncom.2021.678158. eCollection 2021.
Engineering recurrent neural networks from task-relevant manifolds and dynamics.
PLoS Comput Biol. 2020 Aug 12;16(8):e1008128. doi: 10.1371/journal.pcbi.1008128. eCollection 2020 Aug.
Understanding the computation of time using neural network models.
Proc Natl Acad Sci U S A. 2020 May 12;117(19):10530-10540. doi: 10.1073/pnas.1921609117. Epub 2020 Apr 27.
Backpropagation and the brain.
Nat Rev Neurosci. 2020 Jun;21(6):335-346. doi: 10.1038/s41583-020-0277-3. Epub 2020 Apr 17.
