Laboratoire de Physique de l'École Normale Supérieure, PSL and CNRS UMR 8023, Sorbonne Université, 75005 Paris, France.
Neural Comput. 2021 Mar 26;33(4):1063-1112. doi: 10.1162/neco_a_01366.
We study the learning dynamics and the representations emerging in recurrent neural networks (RNNs) trained to integrate one or multiple temporal signals. Combining analytical and numerical investigations, we characterize the conditions under which an RNN with n neurons learns to integrate D (≪ n) scalar signals of arbitrary duration. We show, for linear, ReLU, and sigmoidal neurons, that the internal state lives close to a D-dimensional manifold whose shape is related to the activation function. Each neuron therefore carries, to varying degrees, information about the values of all the integrals. We discuss the deep analogy between our results and the concept of mixed selectivity introduced by computational neuroscientists to interpret cortical recordings.
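To make the setting concrete, the sketch below hand-constructs (rather than trains, as the paper does) a linear RNN whose n-dimensional state integrates D input signals along a D-dimensional subspace, with each neuron's activity mixing contributions from all D integrals. All sizes, variable names, and the NumPy implementation are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, T = 50, 2, 200  # illustrative sizes: n neurons, D signals, T timesteps

# Columns of Q span the D-dimensional integrating subspace inside the
# n-dimensional state space (the "manifold" in the linear case is flat).
Q, _ = np.linalg.qr(rng.standard_normal((n, D)))
W = Q @ Q.T   # recurrent weights: identity on the subspace, zero elsewhere
B = Q         # input weights inject each signal along one subspace axis
R = Q.T       # linear readout recovers the D running integrals

u = rng.standard_normal((T, D)) * 0.1  # D scalar input signals over time
h = np.zeros(n)
for t in range(T):
    h = W @ h + B @ u[t]  # linear RNN update: state accumulates the inputs

# The readout matches the discrete-time integrals of the inputs,
# even though every neuron's activity mixes all D integrals.
print(np.allclose(R @ h, u.sum(axis=0)))
```

Because W acts as the identity on the subspace spanned by Q, any input component injected along Q is preserved step after step, which is exactly the (marginally stable) dynamics a perfect integrator requires.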