Brian DePasquale, Christopher J. Cueva, Kanaka Rajan, G. Sean Escola, L. F. Abbott
Department of Neuroscience, Zuckerman Institute, Columbia University, New York, NY, United States of America.
Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, United States of America.
PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.
Trained recurrent networks are powerful tools for modeling dynamic neural computations. We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations. The method introduces a second network during training to provide suitable "target" dynamics useful for performing the task. Because it exploits the full recurrent connectivity, the method produces networks that perform tasks with fewer neurons and greater noise robustness than traditional least-squares (FORCE) approaches. In addition, we show how introducing additional input signals into the target-generating network, which act as task hints, greatly extends the range of tasks that can be learned and provides control over the complexity and nature of the dynamics of the trained, task-performing network.
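The core idea of the method can be illustrated with a minimal sketch: run a "target-generating" network that receives the desired output as a driving signal, record the recurrent drive it produces, and then solve a regularized least-squares problem for the full recurrent matrix of the task network so that it reproduces that drive. This is a simplified batch version, not the paper's actual alternating online procedure; all variable names, the toy task, and the network sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 500, 0.1   # illustrative sizes, not from the paper
g = 1.5                    # gain placing the random network in a rich dynamic regime

J_D = g * rng.standard_normal((N, N)) / np.sqrt(N)  # target-generating recurrent weights
u = rng.standard_normal(N)                          # feeds the desired output into the target network
w_in = rng.standard_normal(N)                       # shared input weights

t = np.arange(T) * dt
f_in = np.sin(0.2 * t)     # toy input signal
f_out = np.cos(0.2 * t)    # toy desired output

# 1) Run the target-generating network, driven by the input and the desired
#    output, and record its rates and the "target" recurrent drive.
x = np.zeros(N)
rates, drives = [], []
for k in range(T):
    r = np.tanh(x)
    drive = J_D @ r + u * f_out[k]   # target dynamics the task network should internalize
    rates.append(r)
    drives.append(drive)
    x = x + dt * (-x + drive + w_in * f_in[k])

R = np.array(rates)    # T x N firing rates
D = np.array(drives)   # T x N target drives

# 2) Solve regularized least squares for the FULL recurrent matrix J so that
#    J @ r(t) approximates the target drive at every recorded time step.
lam = 1e-3
J = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ D).T
```

Because every entry of `J` is a free parameter (rather than only a rank-limited output feedback loop, as in standard FORCE), the fit can absorb the target dynamics into fewer neurons, which is the source of the efficiency and noise-robustness gains the abstract describes.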