

Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies.

Authors

Soo Wayne W M, Goudar Vishwa, Wang Xiao-Jing

Affiliations

Department of Engineering, University of Cambridge.

Center for Neural Science, New York University.

Publication

bioRxiv. 2023 Oct 10:2023.10.10.561588. doi: 10.1101/2023.10.10.561588.

DOI: 10.1101/2023.10.10.561588
PMID: 37873445
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10592728/
Abstract

Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries have made RNN modeling more approachable and accessible to neuroscience. Yet a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over a long period of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs, which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem based on the idea of specialized skip-connections through time to support the emergence of task-relevant dynamics, and subsequently reinstitute biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical if not impossible to learn using conventional methods. Over the numerous tasks considered here, we achieve fewer training steps and shorter wall-clock times, particularly in tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies.
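The core idea described in the abstract — adding skip-connections through time during training, then reverting to the vanilla architecture — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name, the single fixed skip interval, and the extra weight matrix `W_skip` are assumptions made for the example.

```python
import numpy as np

def rnn_forward(x_seq, W_in, W_rec, skip=0, W_skip=None):
    """Vanilla RNN forward pass with an optional temporal skip-connection.

    When skip > 0, each step additionally receives input from the hidden
    state `skip` steps in the past through W_skip. With skip=0, this is a
    plain vanilla RNN (the "original architecture").
    """
    T = x_seq.shape[0]
    n = W_rec.shape[0]
    hs = [np.zeros(n)]  # h_0
    for t in range(T):
        pre = W_in @ x_seq[t] + W_rec @ hs[-1]
        if skip > 0 and len(hs) > skip:
            # skip-connection through time: shortcut from h_{t-skip}
            pre += W_skip @ hs[-1 - skip]
        hs.append(np.tanh(pre))
    return np.stack(hs[1:])  # shape (T, n)
```

During training one would optimize all weights with the skip path active (`skip > 0`), which shortens the gradient path across long delays; evaluating the same weights with `skip=0` runs the network as a plain RNN, loosely mirroring the "revert to the original architecture" step the abstract describes.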


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f70e/10592728/16d882481e85/nihpp-2023.10.10.561588v1-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f70e/10592728/0e0eb1bf529b/nihpp-2023.10.10.561588v1-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f70e/10592728/d2bee2ad336d/nihpp-2023.10.10.561588v1-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f70e/10592728/91d5e4da8cba/nihpp-2023.10.10.561588v1-f0004.jpg

Similar Articles

1. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies.
bioRxiv. 2023 Oct 10:2023.10.10.561588. doi: 10.1101/2023.10.10.561588.
2. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.
eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
3. A critical review of RNN and LSTM variants in hydrological time series predictions.
MethodsX. 2024 Sep 12;13:102946. doi: 10.1016/j.mex.2024.102946. eCollection 2024 Dec.
4. Achieving Online Regression Performance of LSTMs With Simple RNNs.
IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7632-7643. doi: 10.1109/TNNLS.2021.3086029. Epub 2022 Nov 30.
5. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
6. Gated Orthogonal Recurrent Units: On Learning to Forget.
Neural Comput. 2019 Apr;31(4):765-783. doi: 10.1162/neco_a_01174. Epub 2019 Feb 14.
7. Temporal-kernel recurrent neural networks.
Neural Netw. 2010 Mar;23(2):239-43. doi: 10.1016/j.neunet.2009.10.009. Epub 2009 Nov 5.
8. Subtraction Gates: Another Way to Learn Long-Term Dependencies in Recurrent Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2022 Apr;33(4):1740-1751. doi: 10.1109/TNNLS.2020.3043752. Epub 2022 Apr 4.
9. Winning the Lottery With Neural Connectivity Constraints: Faster Learning Across Cognitive Tasks With Spatially Constrained Sparse RNNs.
Neural Comput. 2023 Oct 10;35(11):1850-1869. doi: 10.1162/neco_a_01613.
10. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.
Elife. 2017 Feb 23;6:e20899. doi: 10.7554/eLife.20899.

References Cited in This Article

1. Neural representational geometries reflect behavioral differences in monkeys and recurrent neural networks.
Nat Commun. 2024 Aug 1;15(1):6479. doi: 10.1038/s41467-024-50503-w.
2. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs.
Nat Neurosci. 2024 Jul;27(7):1349-1363. doi: 10.1038/s41593-024-01668-6. Epub 2024 Jul 9.
3. Optimal information loading into working memory explains dynamic coding in the prefrontal cortex.
Proc Natl Acad Sci U S A. 2023 Nov 28;120(48):e2307991120. doi: 10.1073/pnas.2307991120. Epub 2023 Nov 20.
4. Winning the Lottery With Neural Connectivity Constraints: Faster Learning Across Cognitive Tasks With Spatially Constrained Sparse RNNs.
Neural Comput. 2023 Oct 10;35(11):1850-1869. doi: 10.1162/neco_a_01613.
5. Schema formation in a neural population subspace underlies learning-to-learn in flexible sensorimotor problem-solving.
Nat Neurosci. 2023 May;26(5):879-890. doi: 10.1038/s41593-023-01293-9. Epub 2023 Apr 6.
6. Excitatory-inhibitory recurrent dynamics produce robust visual grids and stable attractors.
Cell Rep. 2022 Dec 13;41(11):111777. doi: 10.1016/j.celrep.2022.111777.
7. Recurrent Connections in the Primate Ventral Visual Stream Mediate a Trade-Off Between Task Performance and Network Size During Core Object Recognition.
Neural Comput. 2022 Jul 14;34(8):1652-1675. doi: 10.1162/neco_a_01506.
8. The role of population structure in computations through neural dynamics.
Nat Neurosci. 2022 Jun;25(6):783-794. doi: 10.1038/s41593-022-01088-4. Epub 2022 Jun 6.
9. Motor cortex activity across movement speeds is predicted by network-level strategies for generating muscle activity.
Elife. 2022 May 27;11:e67620. doi: 10.7554/eLife.67620.
10. Rotational dynamics in motor cortex are consistent with a feedback controller.
Elife. 2021 Nov 3;10:e67256. doi: 10.7554/eLife.67256.