Jarne Cecilia, Caruso Mariano
Departmento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina.
Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.
Cogn Neurodyn. 2024 Jun;18(3):1323-1335. doi: 10.1007/s11571-023-09956-w. Epub 2023 Apr 6.
To understand and improve models that describe various brain regions, it is important to study the dynamics of trained recurrent neural networks. Incorporating Dale's law into such models usually presents several challenges, yet it is an important feature that allows computational models to better capture characteristics of the brain. Here we present a framework to train networks under this constraint and then use it to train them on simple decision-making tasks. We characterized the eigenvalue distributions of the recurrent weight matrices of such networks. Interestingly, we found that the non-dominant eigenvalues of the recurrent weight matrix are distributed in a circle of radius less than 1 when the initial condition before training was random normal, and in a ring when the initial condition was random orthogonal. In both cases, the radius depends neither on the fraction of excitatory and inhibitory units nor on the size of the network. The reduction of the radius, compared with networks trained without the constraint, has implications for the activity and dynamics that we discuss here.
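As a minimal illustration of the kind of object studied here (not the authors' code, and all sizes and parameters are assumptions), the sketch below builds a random recurrent weight matrix obeying Dale's law, where each unit is either excitatory (all outgoing weights non-negative) or inhibitory (all outgoing weights non-positive), balances excitation and inhibition, and computes the eigenvalue spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400           # network size (assumed for illustration)
frac_exc = 0.8    # fraction of excitatory units (assumed)
n_exc = int(frac_exc * N)

# Random normal initial condition, scaled so the spectrum is O(1).
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

# Enforce Dale's law: column j holds the outgoing weights of unit j,
# so the first n_exc columns are made non-negative (excitatory) and
# the remaining columns non-positive (inhibitory).
W[:, :n_exc] = np.abs(W[:, :n_exc])
W[:, n_exc:] = -np.abs(W[:, n_exc:])

# Subtract each row's mean so excitation and inhibition balance on
# average; without this, the nonzero mean weight produces a large
# outlier eigenvalue along the uniform direction.
W -= W.mean(axis=1, keepdims=True)

eigvals = np.linalg.eigvals(W)
radius = np.abs(eigvals).max()
print(f"largest |eigenvalue|: {radius:.2f}")
```

With this balancing, the bulk of the spectrum fills a disk of radius below 1, consistent with the circular-law behavior the abstract describes for the random-normal initial condition.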
The online version contains supplementary material available at 10.1007/s11571-023-09956-w.