PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China.
Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Sun Yat-sen University, Guangzhou 510275, People's Republic of China.
Phys Rev E. 2023 Feb;107(2-1):024307. doi: 10.1103/PhysRevE.107.024307.
Recurrent neural networks are widely used to model spatiotemporal sequences in both natural language processing and neural population dynamics. However, understanding how temporal credit assignment works in these networks remains difficult. Here, we propose that each individual connection in the recurrent computation be modeled by a spike-and-slab distribution, rather than by a precise weight value. We then derive a mean-field algorithm to train the network at the ensemble level. The method is applied to classifying handwritten digits whose pixels are read in sequence, and to a multisensory integration task, a fundamental cognitive function of animals. Our model reveals the important connections that determine the overall performance of the network. It also shows how spatiotemporal information is processed through the hyperparameters of the distribution, and reveals distinct types of emergent neural selectivity. To provide a mechanistic analysis of the ensemble learning, we first derive an analytic solution for the learning in the limit of an infinitely large network. We then carry out a low-dimensional projection of both the neural and synaptic dynamics, analyze symmetry breaking in the parameter space, and finally demonstrate the role of stochastic plasticity in the recurrent computation. Our study thus sheds light, from the ensemble perspective, on how weight uncertainty impacts temporal credit assignment in recurrent neural networks.
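To illustrate the core idea of replacing a precise weight by a spike-and-slab law, here is a minimal sketch (not the authors' code; function names, parameter names, and the specific mixture parameterization are illustrative assumptions): each weight is zero with probability 1 − π (the spike) and is otherwise drawn from a Gaussian slab, so the ensemble is summarized by the hyperparameters (π, mean, variance) that a mean-field rule would update.

```python
import numpy as np

def sample_spike_slab_weights(shape, pi=0.5, mean=0.0, var=1.0, rng=None):
    """Sample a weight matrix whose entries follow a spike-and-slab law:
    with probability 1 - pi an entry is exactly 0 (the spike); otherwise
    it is drawn from the Gaussian slab N(mean, var)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(shape) < pi                # 1 = connection present (slab)
    slab = rng.normal(mean, np.sqrt(var), shape)
    return mask * slab

def spike_slab_moments(pi, mean, var):
    """First two moments of a single spike-and-slab weight; these are the
    ensemble-level quantities a mean-field training rule operates on."""
    m1 = pi * mean                # E[w]
    m2 = pi * (var + mean**2)     # E[w^2]
    return m1, m2 - m1**2         # (mean, variance) of the weight

# Draw one realization of a 100x100 recurrent weight matrix.
W = sample_spike_slab_weights((100, 100), pi=0.3, mean=0.0, var=1.0,
                              rng=np.random.default_rng(0))
print(spike_slab_moments(0.3, 0.0, 1.0))  # -> (0.0, 0.3)
```

The point of the parameterization is that learning never touches a single sampled matrix `W`; it adjusts the hyperparameters of the distribution, so a trained network is an ensemble of realizations sharing the same (π, mean, variance).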