Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA.
Neural Comput. 2021 Sep 16;33(10):2603-2645. doi: 10.1162/neco_a_01418.
Recurrent neural networks (RNNs) have been widely used to model the sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Incorporating biological constraints such as Dale's principle can help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to a wide range of input signals and interpolated between time-warped inputs in its sequence representation. Interestingly, a learned sequence could repeat periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, containing both growing and damped modes, together with the RNN's nonlinearity, was sufficient to generate a limit cycle attractor. We further examined the stability of the dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
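To make the Dale's-principle constraint concrete, below is a minimal illustrative sketch (not the authors' implementation) of an excitatory-inhibitory rate RNN: each presynaptic unit projects weights of a single sign, enforced by multiplying a nonnegative magnitude matrix by a diagonal sign matrix. The population sizes, time constants, pulse input, and ReLU nonlinearity are all assumptions chosen for illustration; the trained connectivity and supervised learning procedure from the paper are not reproduced here.

```python
# Illustrative sketch of a Dale's-principle E-I rate RNN (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 80, 20            # assumed 4:1 excitatory/inhibitory split
N = N_E + N_I
dt, tau = 1.0, 10.0          # assumed Euler step and membrane time constant (ms)

# Dale's principle: every column (presynaptic unit) has a fixed sign.
# W = W_plus @ D, with nonnegative magnitudes W_plus and diagonal signs D.
W_plus = np.abs(rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N)))
D = np.diag([1.0] * N_E + [-1.0] * N_I)
W = W_plus @ D               # excitatory columns positive, inhibitory negative

def step(x, u):
    """One Euler step of tau * dx/dt = -x + W @ r + u, with ReLU rates."""
    r = np.maximum(x, 0.0)   # rectified-linear firing-rate nonlinearity
    return x + (dt / tau) * (-x + W @ r + u)

# Drive the network with a brief pulse, then let it evolve autonomously.
x = np.zeros(N)
for t in range(500):
    u = np.ones(N) * (1.0 if t < 5 else 0.0)   # transient trigger input
    x = step(x, u)

# Eigenspectrum of the sign-constrained recurrent matrix: modes whose
# eigenvalues have positive real part grow and those with negative real
# part damp; combined with the nonlinearity, such a spectrum can support
# a limit cycle rather than unbounded growth or decay.
eigvals = np.linalg.eigvals(W)
print("max real part of eigenvalues:", eigvals.real.max())
```

In this sketch, inspecting the eigenvalues of W mirrors the abstract's eigenspectrum analysis: the linearized dynamics alone would either explode or decay, and it is the rectifying nonlinearity that can bound growing modes into periodic, sequence-repeating activity.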