

Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

Author

Thomas Miconi

Affiliation

The Neurosciences Institute, California, United States.

Published in

Elife. 2017 Feb 23;6:e20899. doi: 10.7554/eLife.20899.

DOI: 10.7554/eLife.20899
PMID: 28230528
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5398889/
Abstract

Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
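The abstract describes a family of reward-modulated rules: exploratory fluctuations during the trial, a Hebbian eligibility trace, and a single delayed, phasic reward at the trial's end. A minimal sketch of this class of rule in a chaotic rate network (a node-perturbation-style variant, not the paper's exact algorithm; the network size, constants, and toy task are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                 # number of units (illustrative)
g = 1.5                # gain > 1: spontaneously chaotic regime
dt, T = 0.1, 100       # Euler step and steps per trial
W = rng.normal(0.0, g / np.sqrt(N), (N, N))  # recurrent weights

def run_trial(W, explore=True):
    """One trial: return the end-of-trial reward and the eligibility trace."""
    x = rng.normal(0.0, 0.1, N)
    elig = np.zeros_like(W)
    for _ in range(T):
        r = np.tanh(x)
        # Exploratory perturbation injected into each unit's input.
        xi = rng.normal(0.0, 0.3, N) if explore else np.zeros(N)
        x = x + dt * (-x + W @ r + xi)
        # Eligibility: correlate each unit's perturbation with presynaptic rates.
        elig += np.outer(xi, r)
    # Delayed, phasic reward, available only at trial end:
    # toy objective is for unit 0 to finish the trial near a target rate.
    reward = -abs(np.tanh(x[0]) - 0.5)
    return reward, elig

eta, r_bar = 0.01, None
for trial in range(300):
    reward, elig = run_trial(W)
    if r_bar is None:
        r_bar = reward
    W += eta * (reward - r_bar) * elig  # weight change gated by reward surprise
    r_bar += 0.1 * (reward - r_bar)     # running baseline (expected reward)
```

The key property matching the abstract is that no continuous error signal reaches the synapses: all within-trial information is stored in the eligibility trace, and learning is driven only by the scalar reward delivered after the trial ends.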


Figures (PMC full text):

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/1534e8a80cbd/elife-20899-fig1.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/17a172344e28/elife-20899-fig2.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/72f5720eba84/elife-20899-fig3.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/8cbf5312f71d/elife-20899-fig4.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/388611ed1ce6/elife-20899-fig5.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/81ec519bddbb/elife-20899-fig6.jpg
Fig 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/7e50304d5457/elife-20899-fig7.jpg
Fig 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/882868945974/elife-20899-fig8.jpg
Fig 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bb5/5398889/28f18ab1d6af/elife-20899-fig9.jpg

Similar Articles

1. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.
Elife. 2017 Feb 23;6:e20899. doi: 10.7554/eLife.20899.
2. A learning rule for the emergence of stable dynamics and timing in recurrent networks.
J Neurophysiol. 2005 Oct;94(4):2275-83. doi: 10.1152/jn.01250.2004.
3. Learning Universal Computations with Spikes.
PLoS Comput Biol. 2016 Jun 16;12(6):e1004895. doi: 10.1371/journal.pcbi.1004895. eCollection 2016 Jun.
4. Experience-induced neural circuits that achieve high capacity.
Neural Comput. 2009 Oct;21(10):2715-54. doi: 10.1162/neco.2009.08-08-851.
5. Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning.
Cereb Cortex. 2014 Mar;24(3):677-90. doi: 10.1093/cercor/bhs348. Epub 2012 Nov 11.
6. Synaptic dynamics: linear model and adaptation algorithm.
Neural Netw. 2014 Aug;56:49-68. doi: 10.1016/j.neunet.2014.04.001. Epub 2014 Apr 28.
7. Learning precisely timed spikes.
Neuron. 2014 May 21;82(4):925-38. doi: 10.1016/j.neuron.2014.03.026. Epub 2014 Apr 24.
8. Chaotic neural dynamics facilitate probabilistic computations through sampling.
Proc Natl Acad Sci U S A. 2024 Apr 30;121(18):e2312992121. doi: 10.1073/pnas.2312992121. Epub 2024 Apr 22.
9. Learning in neural networks by reinforcement of irregular spiking.
Phys Rev E Stat Nonlin Soft Matter Phys. 2004 Apr;69(4 Pt 1):041909. doi: 10.1103/PhysRevE.69.041909. Epub 2004 Apr 30.
10. A model of operant learning based on chaotically varying synaptic strength.
Neural Netw. 2018 Dec;108:114-127. doi: 10.1016/j.neunet.2018.08.006. Epub 2018 Aug 11.

Cited By

1. Fine-Pruning: A biologically inspired algorithm for personalization of machine learning models.
Patterns (N Y). 2025 Apr 29;6(5):101242. doi: 10.1016/j.patter.2025.101242. eCollection 2025 May 9.
2. Evolutionary learning in neural networks by heterosynaptic plasticity.
iScience. 2025 Apr 3;28(5):112340. doi: 10.1016/j.isci.2025.112340. eCollection 2025 May 16.
3. Constructing biologically constrained RNNs via Dale's backprop and topologically-informed pruning.
bioRxiv. 2025 Jan 13:2025.01.09.632231. doi: 10.1101/2025.01.09.632231.
4. Random noise promotes slow heterogeneous synaptic dynamics important for robust working memory computation.
Proc Natl Acad Sci U S A. 2025 Jan 21;122(3):e2316745122. doi: 10.1073/pnas.2316745122. Epub 2025 Jan 16.
5. Ornstein-Uhlenbeck Adaptation as a Mechanism for Learning in Brains and Machines.
Entropy (Basel). 2024 Dec 22;26(12):1125. doi: 10.3390/e26121125.
6. Chaotic neural dynamics facilitate probabilistic computations through sampling.
Proc Natl Acad Sci U S A. 2024 Apr 30;121(18):e2312992121. doi: 10.1073/pnas.2312992121. Epub 2024 Apr 22.
7. Specific connectivity optimizes learning in thalamocortical loops.
Cell Rep. 2024 Apr 23;43(4):114059. doi: 10.1016/j.celrep.2024.114059. Epub 2024 Apr 10.
8. Sensory input to cortex encoded on low-dimensional periphery-correlated subspaces.
PNAS Nexus. 2024 Jan 10;3(1):pgae010. doi: 10.1093/pnasnexus/pgae010. eCollection 2024 Jan.
9. Distinguishing Learning Rules with Brain Machine Interfaces.
Adv Neural Inf Process Syst. 2022 Dec;35:25937-25950.
10. Neural spiking for causal inference and learning.
PLoS Comput Biol. 2023 Apr 4;19(4):e1011005. doi: 10.1371/journal.pcbi.1011005. eCollection 2023 Apr.

References

1. Intrinsically-generated fluctuating activity in excitatory-inhibitory networks.
PLoS Comput Biol. 2017 Apr 24;13(4):e1005498. doi: 10.1371/journal.pcbi.1005498. eCollection 2017 Apr.
2. Recurrent Network Models of Sequence Generation and Memory.
Neuron. 2016 Apr 6;90(1):128-42. doi: 10.1016/j.neuron.2016.02.009. Epub 2016 Mar 10.
3. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
4. A neural network that finds a naturalistic solution for the production of muscle activity.
Nat Neurosci. 2015 Jul;18(7):1025-33. doi: 10.1038/nn.4042. Epub 2015 Jun 15.
5. 'Activity-silent' working memory in prefrontal cortex: a dynamic coding framework.
Trends Cogn Sci. 2015 Jul;19(7):394-405. doi: 10.1016/j.tics.2015.05.004. Epub 2015 Jun 4.
6. A category-free neural population supports evolving demands during decision-making.
Nat Neurosci. 2014 Dec;17(12):1784-1792. doi: 10.1038/nn.3865. Epub 2014 Nov 10.
7. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.
Comput Methods Biomech Biomed Engin. 2015;18(13):1445-58. doi: 10.1080/10255842.2014.916698. Epub 2014 Jul 4.
8. Optimal control of transient dynamics in balanced networks supports generation of complex movements.
Neuron. 2014 Jun 18;82(6):1394-406. doi: 10.1016/j.neuron.2014.04.045.
9. Context-dependent computation by recurrent dynamics in prefrontal cortex.
Nature. 2013 Nov 7;503(7474):78-84. doi: 10.1038/nature12742.
10. Robust timing and motor patterns by taming chaos in recurrent neural networks.
Nat Neurosci. 2013 Jul;16(7):925-33. doi: 10.1038/nn.3405. Epub 2013 May 26.