

Structured flexibility in recurrent neural networks via neuromodulation.

Authors

Costacurta Julia C, Bhandarkar Shaunak, Zoltowski David M, Linderman Scott W

Affiliations

Wu Tsai Neurosciences Institute, Stanford, CA, USA.

Department of Electrical Engineering, Stanford, CA, USA.

Publication

bioRxiv. 2024 Jul 26:2024.07.26.605315. doi: 10.1101/2024.07.26.605315.

DOI: 10.1101/2024.07.26.605315
PMID: 39091788
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11291173/
Abstract

The goal of theoretical neuroscience is to develop models that help us better understand biological intelligence. Such models range broadly in complexity and biological detail. For example, task-optimized recurrent neural networks (RNNs) have generated hypotheses about how the brain may perform various computations, but these models typically assume a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.

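The abstract describes the NM-RNN architecture at a high level: a neuromodulatory subnetwork emits a low-dimensional signal that dynamically rescales the low-rank recurrent weights of an output-generating RNN. The following is a minimal illustrative sketch of that idea; all sizes, nonlinearities, and dynamics here are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 50, 3   # neurons in the output-generating RNN, rank of its recurrence
M = 10         # neurons in the neuromodulatory subnetwork (assumed size)

U = rng.normal(size=(N, R)) / np.sqrt(N)     # low-rank recurrent factors
V = rng.normal(size=(N, R)) / np.sqrt(N)
W_nm = rng.normal(size=(M, M)) / np.sqrt(M)  # neuromodulatory recurrence
C = rng.normal(size=(R, M)) / np.sqrt(M)     # maps subnetwork state to R gains

def step(x, z, dt=0.1):
    """One Euler step of both subnetworks (illustrative dynamics)."""
    z = z + dt * (-z + np.tanh(W_nm @ z))   # neuromodulatory subnetwork state
    s = 1.0 + np.tanh(C @ z)                # low-dimensional gain signal, s >= 0
    W_eff = U @ np.diag(s) @ V.T            # gains rescale the low-rank weights
    x = x + dt * (-x + np.tanh(W_eff @ x))  # output-generating RNN state
    return x, z

x, z = rng.normal(size=N), rng.normal(size=M)
for _ in range(100):
    x, z = step(x, z)
print(x.shape, z.shape)
```

The key point the sketch captures is that the effective recurrent matrix `W_eff` changes at every time step through the rank-`R` gain vector `s`, rather than being a fixed weight matrix as in standard task-optimized RNNs.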

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c5b/11291173/384b4f2859e4/nihpp-2024.07.26.605315v1-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c5b/11291173/2c09f2e3c2c0/nihpp-2024.07.26.605315v1-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c5b/11291173/39c85737cc76/nihpp-2024.07.26.605315v1-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c5b/11291173/9f0406709ff2/nihpp-2024.07.26.605315v1-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c5b/11291173/9666f81dcc01/nihpp-2024.07.26.605315v1-f0005.jpg

Similar Articles

1. Structured flexibility in recurrent neural networks via neuromodulation.
bioRxiv. 2024 Jul 26:2024.07.26.605315. doi: 10.1101/2024.07.26.605315.
2. Considerations in using recurrent neural networks to probe neural dynamics.
J Neurophysiol. 2019 Dec 1;122(6):2504-2521. doi: 10.1152/jn.00467.2018. Epub 2019 Oct 16.
3. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.
eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
4. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks.
Neural Comput. 2013 Mar;25(3):626-49. doi: 10.1162/NECO_a_00409. Epub 2012 Dec 28.
5. Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics.
Adv Neural Inf Process Syst. 2019 Dec;32:15696-15705.
6. RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks.
Entropy (Basel). 2023 Mar 17;25(3):520. doi: 10.3390/e25030520.
7. Neural population dynamics of computing with synaptic modulations.
Elife. 2023 Feb 23;12:e83035. doi: 10.7554/eLife.83035.
8. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies.
bioRxiv. 2023 Oct 10:2023.10.10.561588. doi: 10.1101/2023.10.10.561588.
9. Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks.
Biol Cybern. 2012 Jul;106(4-5):201-17. doi: 10.1007/s00422-012-0490-x. Epub 2012 May 12.
10. Winning the Lottery With Neural Connectivity Constraints: Faster Learning Across Cognitive Tasks With Spatially Constrained Sparse RNNs.
Neural Comput. 2023 Oct 10;35(11):1850-1869. doi: 10.1162/neco_a_01613.

References Cited in This Article

1. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs.
Nat Neurosci. 2024 Jul;27(7):1349-1363. doi: 10.1038/s41593-024-01668-6. Epub 2024 Jul 9.
2. Simple synaptic modulations implement diverse novelty computations.
Cell Rep. 2024 May 28;43(5):114188. doi: 10.1016/j.celrep.2024.114188. Epub 2024 May 6.
3. Neural population dynamics of computing with synaptic modulations.
Elife. 2023 Feb 23;12:e83035. doi: 10.7554/eLife.83035.
4. Parametric control of flexible timing through low-dimensional neural manifolds.
Neuron. 2023 Mar 1;111(5):739-753.e8. doi: 10.1016/j.neuron.2022.12.016. Epub 2023 Jan 13.
5. The role of population structure in computations through neural dynamics.
Nat Neurosci. 2022 Jun;25(6):783-794. doi: 10.1038/s41593-022-01088-4. Epub 2022 Jun 6.
6. Slowly evolving dopaminergic activity modulates the moment-to-moment probability of reward-related self-timed movements.
Elife. 2021 Dec 23;10:e62583. doi: 10.7554/eLife.62583.
7. Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network.
Proc Natl Acad Sci U S A. 2021 Dec 21;118(51). doi: 10.1073/pnas.2111821118.
8. Shaping Dynamics With Multiple Populations in Low-Rank Recurrent Networks.
Neural Comput. 2021 May 13;33(6):1572-1615. doi: 10.1162/neco_a_01381.
9. Stimulus-specific hypothalamic encoding of a persistent defensive state.
Nature. 2020 Oct;586(7831):730-734. doi: 10.1038/s41586-020-2728-4. Epub 2020 Sep 16.
10. Computation Through Neural Population Dynamics.
Annu Rev Neurosci. 2020 Jul 8;43:249-275. doi: 10.1146/annurev-neuro-092619-094115.