
Manipulating the revision of reward value during the intertrial interval increases sign tracking and dopamine release.

Affiliations

Department of Psychology, University of Maryland, College Park, Maryland, United States of America.

Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, United States of America.

Publication Information

PLoS Biol. 2018 Sep 26;16(9):e2004015. doi: 10.1371/journal.pbio.2004015. eCollection 2018 Sep.

Abstract

Recent computational models of sign tracking (ST) and goal tracking (GT) have accounted for observations that dopamine (DA) is not necessary for all forms of learning and have provided a set of predictions to further test their validity. Among these, a central prediction is that manipulating the intertrial interval (ITI) during autoshaping should change the relative ST-GT proportion as well as DA phasic responses. Here, we tested these predictions and found that lengthening the ITI increased ST, i.e., behavioral engagement with conditioned stimuli (CS) and cue-induced phasic DA release. Importantly, DA release was also present at the time of reward delivery, even after learning, and DA release was correlated with time spent in the food cup during the ITI. During conditioning with shorter ITIs, GT was prominent (i.e., engagement with food cup), and DA release responded to the CS while being absent at the time of reward delivery after learning. Hence, shorter ITIs restored the classical DA reward prediction error (RPE) pattern. These results validate the computational hypotheses, opening new perspectives on the understanding of individual differences in Pavlovian conditioning and DA signaling.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5876/6175531/1781a76cff07/pbio.2004015.g001.jpg
