
Reward-based training of recurrent neural networks for cognitive and value-based tasks.

Authors

Song H Francis, Yang Guangyu R, Wang Xiao-Jing

Affiliations

Center for Neural Science, New York University, New York, United States.

NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China.

Publication

Elife. 2017 Jan 13;6:e21492. doi: 10.7554/eLife.21492.

DOI: 10.7554/eLife.21492
PMID: 28084991
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5293493/
Abstract

Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.
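The abstract's two-network scheme (a decision network that selects actions, plus a value network that reads the decision network's activity to predict future reward and thereby guides learning) can be sketched as REINFORCE with a learned baseline. The sketch below deliberately collapses the paper's recurrent trials into a one-step contextual bandit; the toy task, network sizes, and learning rates are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical minimal sketch: a "decision" network chooses actions, a
# "value" readout of the decision network's activity predicts reward,
# and the reward-prediction error (r - v) gates the policy update.
rng = np.random.default_rng(0)
n_in, n_hid, n_act = 2, 32, 2
W_in = rng.normal(0.0, 0.5, (n_hid, n_in))   # fixed random input weights
W_out = np.zeros((n_act, n_hid))             # decision readout (learned)
w_val = np.zeros(n_hid)                      # value readout (learned)
lr_pi, lr_v = 0.05, 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    coh = rng.choice([-1.0, 1.0])                 # stimulus "direction"
    u = np.array([max(coh, 0.0), max(-coh, 0.0)])
    u = u + rng.normal(0.0, 0.3, n_in)            # sensory noise
    h = np.tanh(W_in @ u)                         # decision-net activity
    p = softmax(W_out @ h)                        # action probabilities
    a = rng.choice(n_act, p=p)                    # sampled choice
    r = 1.0 if (a == 0) == (coh > 0) else 0.0     # reward for correct choice
    v = w_val @ h                                 # predicted reward from activity
    grad_logp = (np.eye(n_act)[a] - p)[:, None] * h   # d log p(a) / d W_out
    W_out += lr_pi * (r - v) * grad_logp          # baseline-corrected update
    w_val += lr_v * (r - v) * h                   # value readout tracks reward

# Evaluate the greedy policy on fresh noisy stimuli.
correct = 0
for _ in range(500):
    coh = rng.choice([-1.0, 1.0])
    u = np.array([max(coh, 0.0), max(-coh, 0.0)]) + rng.normal(0.0, 0.3, n_in)
    a = int(np.argmax(W_out @ np.tanh(W_in @ u)))
    correct += int((a == 0) == (coh > 0))
acc = correct / 500
print(f"greedy accuracy: {acc:.2f}")
```

Note how the value prediction is never used when the trained policy acts, only while it learns, which mirrors the abstract's claim that value representation is essential for learning but not for executing a task.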


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/b146915c7a78/elife-21492-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/8369fad7aa54/elife-21492-fig1-figsupp1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/cfa37139bcdb/elife-21492-fig1-figsupp2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/bbfc770883c6/elife-21492-fig1-figsupp3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/b3dfcb2f2055/elife-21492-fig1-figsupp4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/009af671d3ba/elife-21492-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/5cc8559a86ae/elife-21492-fig2-figsupp1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/9417bffd10c9/elife-21492-fig2-figsupp2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/7fab989ed256/elife-21492-fig2-figsupp3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/dff68dff871f/elife-21492-fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/1f5218838620/elife-21492-fig3-figsupp1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/c2bc2990cf23/elife-21492-fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/5d9875a80543/elife-21492-fig4-figsupp1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/54548d757706/elife-21492-fig4-figsupp2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dcfd/5293493/6a6750709a46/elife-21492-fig4-figsupp3.jpg

Similar articles

1. Reward-based training of recurrent neural networks for cognitive and value-based tasks.
Elife. 2017 Jan 13;6:e21492. doi: 10.7554/eLife.21492.
2. Reward-dependent learning in neuronal networks for planning and decision making.
Prog Brain Res. 2000;126:217-29. doi: 10.1016/S0079-6123(00)26016-0.
3. Mechanisms of reinforcement learning and decision making in the primate dorsolateral prefrontal cortex.
Ann N Y Acad Sci. 2007 May;1104:108-22. doi: 10.1196/annals.1390.007. Epub 2007 Mar 8.
4. Emphasizing the "positive" in positive reinforcement: using nonbinary rewarding for training monkeys on cognitive tasks.
J Neurophysiol. 2018 Jul 1;120(1):115-128. doi: 10.1152/jn.00572.2017. Epub 2018 Apr 4.
5. A neural network model with dopamine-like reinforcement signal that learns a spatial delayed response task.
Neuroscience. 1999;91(3):871-90. doi: 10.1016/s0306-4522(98)00697-6.
6. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
7. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.
eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
8. Working Memory and Decision-Making in a Frontoparietal Circuit Model.
J Neurosci. 2017 Dec 13;37(50):12167-12186. doi: 10.1523/JNEUROSCI.0343-17.2017. Epub 2017 Nov 7.
9. The decision to engage cognitive control is driven by expected reward-value: neural and behavioral evidence.
PLoS One. 2012;7(12):e51637. doi: 10.1371/journal.pone.0051637. Epub 2012 Dec 19.
10. Goal-Directed Decision Making with Spiking Neurons.
J Neurosci. 2016 Feb 3;36(5):1529-46. doi: 10.1523/JNEUROSCI.2854-15.2016.

Cited by

1. Modelling cognitive flexibility with deep neural networks.
Curr Opin Behav Sci. 2024 Jun;57:101361. doi: 10.1016/j.cobeha.2024.101361.
2. The effects of the post-delay epochs on working memory error reduction.
PLoS Comput Biol. 2025 May 13;21(5):e1013083. doi: 10.1371/journal.pcbi.1013083. eCollection 2025 May.
3. A Neural Circuit Framework for Economic Choice: From Building Blocks of Valuation to Compositionality in Multitasking.
4. Striatal arbitration between choice strategies guides few-shot adaptation.
Nat Commun. 2025 Feb 20;16(1):1811. doi: 10.1038/s41467-025-57049-5.
5. A neural implementation model of feedback-based motor learning.
Nat Commun. 2025 Feb 20;16(1):1805. doi: 10.1038/s41467-024-54738-5.
6. Cerebellar-driven cortical dynamics can enable task acquisition, switching and consolidation.
Nat Commun. 2024 Dec 30;15(1):10913. doi: 10.1038/s41467-024-55315-6.
7. A working memory model based on recurrent neural networks using reinforcement learning.
Cogn Neurodyn. 2024 Oct;18(5):3031-3058. doi: 10.1007/s11571-024-10137-6. Epub 2024 Jun 13.
8. Neural basis of concurrent deliberation toward a choice and degree of confidence.
bioRxiv. 2024 Sep 27:2024.08.06.606833. doi: 10.1101/2024.08.06.606833.
9. Neural representational geometries reflect behavioral differences in monkeys and recurrent neural networks.
Nat Commun. 2024 Aug 1;15(1):6479. doi: 10.1038/s41467-024-50503-w.
10. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs.
Nat Neurosci. 2024 Jul;27(7):1349-1363. doi: 10.1038/s41593-024-01668-6. Epub 2024 Jul 9.

References

1. Intrinsically-generated fluctuating activity in excitatory-inhibitory networks.
PLoS Comput Biol. 2017 Apr 24;13(4):e1005498. doi: 10.1371/journal.pcbi.1005498. eCollection 2017 Apr.
2. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.
Elife. 2017 Feb 23;6:e20899. doi: 10.7554/eLife.20899.
3. Random synaptic feedback weights support error backpropagation for deep learning.
Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
4. Toward an Integration of Deep Learning and Neuroscience.
Front Comput Neurosci. 2016 Sep 14;10:94. doi: 10.3389/fncom.2016.00094. eCollection 2016.
5. Reinforcement learning with Marr.
Curr Opin Behav Sci. 2016 Oct;11:67-73. doi: 10.1016/j.cobeha.2016.04.005.
6. Recurrent Network Models of Sequence Generation and Memory.
Neuron. 2016 Apr 6;90(1):128-42. doi: 10.1016/j.neuron.2016.02.009. Epub 2016 Mar 10.
7. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
8. Explicit information for category-orthogonal object properties increases along the ventral stream.
Nat Neurosci. 2016 Apr;19(4):613-22. doi: 10.1038/nn.4247. Epub 2016 Feb 22.
9. Goal-Directed Decision Making with Spiking Neurons.
J Neurosci. 2016 Feb 3;36(5):1529-46. doi: 10.1523/JNEUROSCI.2854-15.2016.
10. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks.
PLoS Comput Biol. 2015 Oct 23;11(10):e1004489. doi: 10.1371/journal.pcbi.1004489. eCollection 2015 Oct.