Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
Department of Neurosurgery, Stanford University, Stanford, CA, USA.
Nat Neurosci. 2024 Jul;27(7):1349-1363. doi: 10.1038/s41593-024-01668-6. Epub 2024 Jul 9.
Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
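The training setup described above, a recurrent network driven by a stimulus plus a one-hot task rule, with unit activity restricted to be positive and with lesions applied to clusters of units, can be sketched compactly. The following is a minimal illustration, not the authors' code: the class name MultitaskRNN, the softplus nonlinearity (one concrete choice of positive activation), and the lesion_mask argument are assumptions made for this example.

```python
# Minimal sketch, assuming a vanilla RNN with a positive activation and a
# task-rule input; lesion_mask zeros out a chosen cluster of units.
import torch
import torch.nn as nn

class MultitaskRNN(nn.Module):
    def __init__(self, n_in, n_rec, n_out, n_tasks):
        super().__init__()
        self.w_in = nn.Linear(n_in + n_tasks, n_rec)  # stimulus + one-hot task rule
        self.w_rec = nn.Linear(n_rec, n_rec)
        self.w_out = nn.Linear(n_rec, n_out)
        self.act = nn.Softplus()                      # keeps unit activity positive

    def forward(self, x, rule, lesion_mask=None):
        # x: (T, batch, n_in); rule: (batch, n_tasks) one-hot task context
        h = torch.zeros(x.shape[1], self.w_rec.in_features)
        outs = []
        for t in range(x.shape[0]):
            inp = torch.cat([x[t], rule], dim=-1)
            h = self.act(self.w_rec(h) + self.w_in(inp))
            if lesion_mask is not None:               # silence a cluster of units
                h = h * lesion_mask
            outs.append(self.w_out(h))
        return torch.stack(outs)
```

A cluster lesion in this sketch is simply a binary mask, for example lesion_mask = torch.ones(n_rec) with the indices of one unit cluster set to zero, which mirrors the modular performance deficits reported above.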
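The dynamical systems analyses mentioned in the abstract typically locate approximate fixed points of the trained network: hidden states h* where the recurrent update leaves the state nearly unchanged. A common approach, sketched below under the same assumed model, is to minimize the speed q(h) = ||F(h, u) - h||^2 over hidden states h while the input u and task rule are frozen; the function name and hyperparameters here are illustrative, not taken from the paper.

```python
# Hedged sketch of standard fixed-point finding for the assumed MultitaskRNN:
# gradient-descend on hidden states until the one-step update is ~ identity.
def find_fixed_points(model, rule, h_init, n_steps=2000, lr=1e-2):
    h = h_init.clone().requires_grad_(True)
    # Frozen (here zero) stimulus input; rule supplies the task context.
    u = torch.zeros(h.shape[0], model.w_in.in_features - rule.shape[-1])
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        inp = torch.cat([u, rule], dim=-1)
        h_next = model.act(model.w_rec(h) + model.w_in(inp))
        q = ((h_next - h) ** 2).sum(dim=-1).mean()  # speed of the dynamics
        q.backward()
        opt.step()
    return h.detach()
```

Structures such as the ring attractor reused across memory tasks would appear in this kind of analysis as a continuous one-dimensional manifold of slow points rather than isolated fixed points.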