

Working Memory Load Strengthens Reward Prediction Errors.

Author Information

Collins Anne G E, Ciullo Brittany, Frank Michael J, Badre David

Affiliations

Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720.

Publication Information

J Neurosci. 2017 Apr 19;37(16):4332-4342. doi: 10.1523/JNEUROSCI.2700-16.2017. Epub 2017 Mar 20.

Abstract

Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors.
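The incremental RL process the abstract contrasts with working memory can be illustrated with a standard delta-rule update, in which the reward prediction error (RPE) drives changes in expected value. This is a minimal sketch of that generic computation, not the paper's fitted model; the learning rate, the WM weight, and the function names are illustrative assumptions.

```python
def rl_update(q, reward, alpha=0.1):
    """Delta-rule update: expected value q moves toward reward by the RPE."""
    rpe = reward - q           # reward prediction error
    return q + alpha * rpe     # slow, incremental value update

def mixed_value(q_rl, wm_value, w_wm):
    """Blend a fast WM estimate with the slow RL value.

    w_wm (0..1) stands in for how much behavior relies on WM; the paper's
    finding implies it is high when the learning problem fits within WM
    capacity and lower under high load or delay.
    """
    return w_wm * wm_value + (1.0 - w_wm) * q_rl

# Toy run: RL value creeps up across rewarded trials,
# while WM (here, perfect memory of the last reward) adapts in one step.
q = 0.0
for r in [1.0, 1.0, 0.0, 1.0]:
    q = rl_update(q, r)
```

Under this kind of mixture, the WM component absorbs much of the predictable reward when load is low, which is consistent with the reduced striatal RPE signals the study reports for set sizes within WM capacity.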


Similar Articles

1. Working Memory Load Strengthens Reward Prediction Errors.
J Neurosci. 2017 Apr 19;37(16):4332-4342. doi: 10.1523/JNEUROSCI.2700-16.2017. Epub 2017 Mar 20.
9. Beta Oscillations in Monkey Striatum Encode Reward Prediction Error Signals.
J Neurosci. 2023 May 3;43(18):3339-3352. doi: 10.1523/JNEUROSCI.0952-22.2023. Epub 2023 Apr 4.
10. Multiple memory systems as substrates for multiple decision systems.
Neurobiol Learn Mem. 2015 Jan;117:4-13. doi: 10.1016/j.nlm.2014.04.014. Epub 2014 May 15.

Cited By

2. Social inequity disrupts reward-based learning.
Commun Psychol. 2025 Aug 16;3(1):125. doi: 10.1038/s44271-025-00300-y.
8. Does an external distractor interfere with the triggering of item-specific control?
J Exp Psychol Hum Percept Perform. 2025 Jun;51(6):808-825. doi: 10.1037/xhp0001323. Epub 2025 Mar 31.
9. Policy Complexity Suppresses Dopamine Responses.
J Neurosci. 2025 Feb 26;45(9):e1756242024. doi: 10.1523/JNEUROSCI.1756-24.2024.

References

6. Model-based choices involve prospective neural activity.
Nat Neurosci. 2015 May;18(5):767-72. doi: 10.1038/nn.3981. Epub 2015 Mar 23.
8. The cognitive neuroscience of working memory.
Annu Rev Psychol. 2015 Jan 3;66:115-42. doi: 10.1146/annurev-psych-010814-015031. Epub 2014 Sep 19.
