Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London W1T 4JG, U.K.
Neural Comput. 2022 Jul 14;34(8):1790-1811. doi: 10.1162/neco_a_01511.
Neural computations can be framed as dynamical processes, whereby the structure of the dynamics within a neural network is a direct reflection of the computations that the network performs. A key step in generating mechanistic interpretations within this computation-through-dynamics framework is to establish the link among network connectivity, dynamics, and computation. This link is only partly understood. Recent work has focused on producing algorithms for engineering artificial recurrent neural networks (RNNs) with dynamics targeted to a specific goal manifold. Some of these algorithms require only a set of vectors tangent to the target manifold to be computed and thus provide a general method that can be applied to a diverse set of problems. Nevertheless, computing such vectors for an arbitrary manifold in a high-dimensional state space remains highly challenging, which in practice limits the applicability of this approach. Here we demonstrate how topology and differential geometry can be leveraged to simplify this task by first computing tangent vectors on a low-dimensional topological manifold and then embedding these in state space. The simplicity of this procedure greatly facilitates the creation of manifold-targeted RNNs, as well as the process of designing task-solving, on-manifold dynamics. This new method should enable the application of network-engineering-based approaches to a wide set of problems in neuroscience and machine learning. Our description of how fundamental concepts from differential geometry can be mapped onto different aspects of neural dynamics is a further demonstration of how the language of differential geometry can enrich the conceptual framework for describing neural dynamics and computation.
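The core idea of the abstract, computing tangent vectors on a low-dimensional manifold and pushing them forward into a high-dimensional state space, can be illustrated with a minimal sketch. The example below is hypothetical and not taken from the paper: it uses a ring manifold (S¹), a random linear embedding into R^N standing in for the embedding map, and the Jacobian (pushforward) of that map to obtain state-space tangent vectors; all names and dimensions are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: the low-dimensional topological manifold is a ring (S^1),
# parameterized by an angle theta. We embed it in an N-dimensional state space.
rng = np.random.default_rng(0)
N = 50  # illustrative state-space dimension

# A fixed linear map embedding the 2-D plane containing the ring into R^N.
E = rng.standard_normal((N, 2))

def embed(theta):
    """Embedding phi: S^1 -> R^N (the ring realized in state space)."""
    return E @ np.array([np.cos(theta), np.sin(theta)])

def tangent(theta):
    """Pushforward of the unit tangent d/dtheta under phi, i.e., the
    Jacobian of the embedding applied to the on-manifold tangent vector."""
    return E @ np.array([-np.sin(theta), np.cos(theta)])

# Sanity check: the pushed-forward tangent matches the numerical derivative
# of the embedded curve, so it is indeed tangent to the manifold in R^N.
theta, eps = 1.0, 1e-6
numeric = (embed(theta + eps) - embed(theta - eps)) / (2 * eps)
assert np.allclose(numeric, tangent(theta), atol=1e-5)
```

Because the tangent vectors are computed in the easy, low-dimensional coordinates and only then mapped into state space, the same pattern extends to other manifolds (tori, spheres) by swapping in the appropriate parameterization and embedding.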