Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint.

Authors

Jarne Cecilia, Caruso Mariano

Affiliations

Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina.

Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.

Publication

Cogn Neurodyn. 2024 Jun;18(3):1323-1335. doi: 10.1007/s11571-023-09956-w. Epub 2023 Apr 6.

Abstract

In order to comprehend and enhance models that describe various brain regions, it is important to study the dynamics of trained recurrent neural networks. Including Dale's law in such models usually presents several challenges, yet it is an important aspect that allows computational models to better capture the characteristics of the brain. Here we present a framework to train networks under this constraint, and we used it to train them on simple decision-making tasks. We characterized the eigenvalue distributions of the recurrent weight matrices of such networks. Interestingly, we discovered that the non-dominant eigenvalues of the recurrent weight matrix are distributed on a circle with a radius less than 1 for networks whose initial condition before training was random normal, and on a ring for those whose initial condition was random orthogonal. In both cases, the radius depends neither on the fraction of excitatory and inhibitory units nor on the size of the network. The reduction of the radius, compared to networks trained without the constraint, has implications for the activity and dynamics that we discuss here.
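The setup the abstract describes can be sketched in a few lines of NumPy. A common way to impose Dale's law in trained RNNs (the approach of the excitatory-inhibitory training framework of Song, Yang, and Wang, listed among the similar articles) is to parameterize the recurrent matrix as a non-negative magnitude matrix times a diagonal sign matrix, so that each unit's outgoing weights share one sign. This is a minimal illustrative sketch under that assumption, not the authors' code; the function name, network size, and 80/20 excitatory-inhibitory split are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def dale_recurrent_matrix(n=200, frac_exc=0.8, init="normal"):
    """Recurrent weight matrix obeying Dale's law: with the convention
    W[i, j] = weight from unit j to unit i, every column (a unit's
    outgoing weights) is all non-negative (excitatory unit) or all
    non-positive (inhibitory unit)."""
    n_exc = int(frac_exc * n)
    signs = np.ones(n)
    signs[n_exc:] = -1.0          # last (1 - frac_exc) units are inhibitory
    if init == "normal":
        # Random normal entries, scaled so the spectrum fills roughly the unit disk.
        M = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    else:
        # Random orthogonal matrix (eigenvalues on the unit circle).
        M, _ = np.linalg.qr(rng.normal(size=(n, n)))
    # Non-negative magnitudes times a diagonal sign matrix enforce the constraint;
    # broadcasting multiplies column j by signs[j].
    return np.abs(M) * signs

W = dale_recurrent_matrix()
eigvals = np.linalg.eigvals(W)
# The all-positive excitatory bulk produces one dominant real eigenvalue;
# the remaining non-dominant eigenvalues form the cloud whose radius the
# paper characterizes before and after training.
radius = np.sort(np.abs(eigvals))[-2]   # largest non-dominant magnitude
```

At initialization the sign constraint already splits the spectrum into a dominant outlier plus a bulk; the paper's result concerns how training under the constraint shrinks the radius of that non-dominant bulk below 1 (circle for random-normal, ring for random-orthogonal initial conditions).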

SUPPLEMENTARY INFORMATION

The online version contains supplementary material available at 10.1007/s11571-023-09956-w.


Similar Articles

1. Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint.
   Cogn Neurodyn. 2024 Jun;18(3):1323-1335. doi: 10.1007/s11571-023-09956-w. Epub 2023 Apr 6.
2. Functional Implications of Dale's Law in Balanced Neuronal Network Dynamics and Decision Making.
   Front Neurosci. 2022 Feb 28;16:801847. doi: 10.3389/fnins.2022.801847. eCollection 2022.
3. Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks.
   Cogn Neurodyn. 2023 Feb;17(1):257-275. doi: 10.1007/s11571-022-09802-5. Epub 2022 Apr 20.
4. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
   PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
5. Eigenvalue spectra of random matrices for neural networks.
   Phys Rev Lett. 2006 Nov 3;97(18):188104. doi: 10.1103/PhysRevLett.97.188104. Epub 2006 Nov 2.
7. Training dynamically balanced excitatory-inhibitory networks.
   PLoS One. 2019 Aug 8;14(8):e0220547. doi: 10.1371/journal.pone.0220547. eCollection 2019.
8. Constructing Biologically Constrained RNNs via Dale's Backprop and Topologically-Informed Pruning.
   bioRxiv. 2025 Jan 13:2025.01.09.632231. doi: 10.1101/2025.01.09.632231.
9. Symmetries Constrain Dynamics in a Family of Balanced Neural Networks.
   J Math Neurosci. 2017 Oct 10;7(1):10. doi: 10.1186/s13408-017-0052-6.
10. Function approximation in inhibitory networks.
    Neural Netw. 2016 May;77:95-106. doi: 10.1016/j.neunet.2016.01.010. Epub 2016 Feb 18.

Cited By

1. Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights.
   Front Syst Neurosci. 2024 Mar 27;18:1269190. doi: 10.3389/fnsys.2024.1269190. eCollection 2024.

References

1. Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks.
   Cogn Neurodyn. 2023 Feb;17(1):257-275. doi: 10.1007/s11571-022-09802-5. Epub 2022 Apr 20.
2. Functional Implications of Dale's Law in Balanced Neuronal Network Dynamics and Decision Making.
   Front Neurosci. 2022 Feb 28;16:801847. doi: 10.3389/fnins.2022.801847. eCollection 2022.
3. Encoding time in neural dynamic regimes with distinct computational tradeoffs.
   PLoS Comput Biol. 2022 Mar 3;18(3):e1009271. doi: 10.1371/journal.pcbi.1009271. eCollection 2022 Mar.
5. A geometric framework for understanding dynamic information integration in context-dependent computation.
   iScience. 2021 Jul 30;24(8):102919. doi: 10.1016/j.isci.2021.102919. eCollection 2021 Aug 20.
6. Optimal learning with excitatory and inhibitory synapses.
   PLoS Comput Biol. 2020 Dec 28;16(12):e1008536. doi: 10.1371/journal.pcbi.1008536. eCollection 2020 Dec.
7. Computation Through Neural Population Dynamics.
   Annu Rev Neurosci. 2020 Jul 8;43:249-275. doi: 10.1146/annurev-neuro-092619-094115.
9. Understanding the computation of time using neural network models.
   Proc Natl Acad Sci U S A. 2020 May 12;117(19):10530-10540. doi: 10.1073/pnas.1921609117. Epub 2020 Apr 27.
10. Coding with transient trajectories in recurrent neural networks.
    PLoS Comput Biol. 2020 Feb 13;16(2):e1007655. doi: 10.1371/journal.pcbi.1007655. eCollection 2020 Feb.
