
Similar Articles

1. Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint.
Cogn Neurodyn. 2024 Jun;18(3):1323-1335. doi: 10.1007/s11571-023-09956-w. Epub 2023 Apr 6.
2. Functional Implications of Dale's Law in Balanced Neuronal Network Dynamics and Decision Making.
Front Neurosci. 2022 Feb 28;16:801847. doi: 10.3389/fnins.2022.801847. eCollection 2022.
3. Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks.
Cogn Neurodyn. 2023 Feb;17(1):257-275. doi: 10.1007/s11571-022-09802-5. Epub 2022 Apr 20.
4. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
5. Eigenvalue spectra of random matrices for neural networks.
Phys Rev Lett. 2006 Nov 3;97(18):188104. doi: 10.1103/PhysRevLett.97.188104. Epub 2006 Nov 2.
6. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation.
Neural Comput. 2021 Sep 16;33(10):2603-2645. doi: 10.1162/neco_a_01418.
7. Training dynamically balanced excitatory-inhibitory networks.
PLoS One. 2019 Aug 8;14(8):e0220547. doi: 10.1371/journal.pone.0220547. eCollection 2019.
8. Constructing biologically constrained RNNs via Dale's backprop and topologically-informed pruning.
bioRxiv. 2025 Jan 13:2025.01.09.632231. doi: 10.1101/2025.01.09.632231.
9. Symmetries Constrain Dynamics in a Family of Balanced Neural Networks.
J Math Neurosci. 2017 Oct 10;7(1):10. doi: 10.1186/s13408-017-0052-6.
10. Function approximation in inhibitory networks.
Neural Netw. 2016 May;77:95-106. doi: 10.1016/j.neunet.2016.01.010. Epub 2016 Feb 18.

Cited By

1. Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights.
Front Syst Neurosci. 2024 Mar 27;18:1269190. doi: 10.3389/fnsys.2024.1269190. eCollection 2024.

References

1. Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks.
Cogn Neurodyn. 2023 Feb;17(1):257-275. doi: 10.1007/s11571-022-09802-5. Epub 2022 Apr 20.
2. Functional Implications of Dale's Law in Balanced Neuronal Network Dynamics and Decision Making.
Front Neurosci. 2022 Feb 28;16:801847. doi: 10.3389/fnins.2022.801847. eCollection 2022.
3. Encoding time in neural dynamic regimes with distinct computational tradeoffs.
PLoS Comput Biol. 2022 Mar 3;18(3):e1009271. doi: 10.1371/journal.pcbi.1009271. eCollection 2022 Mar.
4. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation.
Neural Comput. 2021 Sep 16;33(10):2603-2645. doi: 10.1162/neco_a_01418.
5. A geometric framework for understanding dynamic information integration in context-dependent computation.
iScience. 2021 Jul 30;24(8):102919. doi: 10.1016/j.isci.2021.102919. eCollection 2021 Aug 20.
6. Optimal learning with excitatory and inhibitory synapses.
PLoS Comput Biol. 2020 Dec 28;16(12):e1008536. doi: 10.1371/journal.pcbi.1008536. eCollection 2020 Dec.
7. Computation Through Neural Population Dynamics.
Annu Rev Neurosci. 2020 Jul 8;43:249-275. doi: 10.1146/annurev-neuro-092619-094115.
8. Neural Trajectories in the Supplementary Motor Area and Motor Cortex Exhibit Distinct Geometries, Compatible with Different Classes of Computation.
Neuron. 2020 Aug 19;107(4):745-758.e6. doi: 10.1016/j.neuron.2020.05.020. Epub 2020 Jun 8.
9. Understanding the computation of time using neural network models.
Proc Natl Acad Sci U S A. 2020 May 12;117(19):10530-10540. doi: 10.1073/pnas.1921609117. Epub 2020 Apr 27.
10. Coding with transient trajectories in recurrent neural networks.
PLoS Comput Biol. 2020 Feb 13;16(2):e1007655. doi: 10.1371/journal.pcbi.1007655. eCollection 2020 Feb.

Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint.

Authors

Jarne Cecilia, Caruso Mariano

Affiliations

Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina.

Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.

Publication Information

Cogn Neurodyn. 2024 Jun;18(3):1323-1335. doi: 10.1007/s11571-023-09956-w. Epub 2023 Apr 6.

DOI:10.1007/s11571-023-09956-w
PMID:38826641
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11143133/
Abstract

To understand and improve models of various brain regions, it is important to study the dynamics of trained recurrent neural networks. Incorporating Dale's law into such models usually presents several challenges, yet it is an important aspect that allows computational models to better capture characteristics of the brain. Here we present a framework for training networks under this constraint, and we use it to train networks on simple decision-making tasks. We characterized the eigenvalue distributions of the recurrent weight matrices of such networks. Interestingly, we found that the non-dominant eigenvalues of the recurrent weight matrix are distributed in a circle of radius less than 1 for networks whose initial condition before training was random normal, and in a ring for networks whose initial condition was random orthogonal. In both cases, the radius depends neither on the fraction of excitatory and inhibitory units nor on the size of the network. The reduction of the radius, compared to networks trained without the constraint, has implications for the activity and dynamics that we discuss here.
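As a rough illustration of the constraint the abstract describes (this is not the authors' training framework; the helper name and all parameter values below are invented for this sketch), the snippet constructs an untrained random weight matrix satisfying Dale's law — every presynaptic unit's outgoing column shares one sign — and inspects its eigenvalue spectrum. Even at initialization, the shared column signs create a strong rank-one component, so a single dominant eigenvalue separates from the bulk of non-dominant eigenvalues:

```python
import numpy as np

def dales_law_matrix(n=200, frac_exc=0.8, seed=0):
    """Random recurrent weight matrix obeying Dale's law: each unit is
    either excitatory (its outgoing column is nonnegative) or
    inhibitory (its outgoing column is nonpositive)."""
    rng = np.random.default_rng(seed)
    # Magnitudes from |N(0, 1/n)| keep the bulk of the spectrum O(1).
    magnitudes = np.abs(rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n)))
    signs = np.ones(n)
    signs[int(frac_exc * n):] = -1.0  # trailing columns are inhibitory
    return magnitudes * signs         # sign broadcast down each column

W = dales_law_matrix()
eigvals = np.linalg.eigvals(W)
radii = np.sort(np.abs(eigvals))[::-1]
# radii[0] is the dominant eigenvalue magnitude; radii[1:] form the bulk.
print(radii[0], radii[1])
```

Training under the constraint, as studied in the paper, then shapes where the non-dominant bulk ends up (a disc or a ring depending on the initialization), while this sketch only shows the untrained starting point.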

Supplementary Information

The online version contains supplementary material available at 10.1007/s11571-023-09956-w.
