

Oscillatory signatures of reward prediction errors in declarative learning.

Author information

Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000, Ghent, Belgium.


Publication information

Neuroimage. 2019 Feb 1;186:137-145. doi: 10.1016/j.neuroimage.2018.10.083. Epub 2018 Nov 2.

Abstract

Reward prediction errors (RPEs) are crucial to learning. Whereas these mismatches between reward expectation and reward outcome are known to drive procedural learning, their role in declarative learning remains underexplored. Earlier work from our lab addressed this question and consistently found that signed reward prediction errors (SRPEs; "better-than-expected" signals) boost declarative learning. In the current EEG study, we sought to explore the neural signatures of SRPEs. Participants studied 60 Dutch-Swahili word pairs while RPE magnitudes were parametrically manipulated. Behaviorally, we replicated our previous findings that SRPEs drive declarative learning, with increased recognition for word pairs accompanied by large, positive RPEs. In the EEG data, at the start of reward feedback processing, we found an oscillatory (theta) signature consistent with unsigned reward prediction errors (URPEs; "different-than-expected" signals). Slightly later during reward feedback processing, we observed oscillatory (high-beta and high-alpha) SRPE signatures, similar to SRPE signatures during procedural learning. These findings illuminate the time course of neural oscillations in processing reward during declarative learning, providing important constraints for future theoretical work.

