

Value learning and arousal in the extinction of probabilistic rewards: the role of dopamine in a modified temporal difference model.

Author information

Song Minryung R, Fellous Jean-Marc

Affiliations

Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea.

Graduate Interdisciplinary Program in Neuroscience, University of Arizona, Tucson, Arizona, United States of America; Department of Psychology, University of Arizona, Tucson, Arizona, United States of America; Department of Applied Mathematics, University of Arizona, Tucson, Arizona, United States of America.

Publication information

PLoS One. 2014 Feb 26;9(2):e89494. doi: 10.1371/journal.pone.0089494. eCollection 2014.

Abstract

Because most rewarding events are probabilistic and changing, the extinction of probabilistic rewards is important for survival. It has been proposed that the extinction of probabilistic rewards depends on arousal and the amount of learning of reward values. Midbrain dopamine neurons were suggested to play a role in both arousal and learning reward values. Despite extensive research on modeling dopaminergic activity in reward learning (e.g. temporal difference models), few studies have been done on modeling its role in arousal. Although temporal difference models capture key characteristics of dopaminergic activity during the extinction of deterministic rewards, they have been less successful at simulating the extinction of probabilistic rewards. By adding an arousal signal to a temporal difference model, we were able to simulate the extinction of probabilistic rewards and its dependence on the amount of learning. Our simulations propose that arousal allows the probability of reward to have lasting effects on the updating of reward value, which slows the extinction of low probability rewards. Using this model, we predicted that, by signaling the prediction error, dopamine determines the learned reward value that has to be extinguished during extinction and participates in regulating the size of the arousal signal that controls the learning rate. These predictions were supported by pharmacological experiments in rats.
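The abstract's core mechanism, a temporal difference value update whose learning rate is controlled by an arousal signal, can be sketched in miniature. The update rule, the constants, and the specific form of the arousal dynamics below are illustrative assumptions, not the paper's equations: here arousal is a running average of the unsigned prediction error, and larger arousal shrinks the effective learning rate.

```python
import random

def simulate(p_reward, n_acq=500, n_ext=500, alpha=0.2, gain=4.0, seed=0):
    """Acquisition followed by extinction under a TD(0)-style update
    whose learning rate is damped by an arousal signal.

    Hypothetical instantiation: arousal tracks the running average of
    |prediction error|, so probabilistic training keeps it elevated;
    larger arousal shrinks the effective learning rate, which slows
    extinction after low-probability rewards.
    """
    rng = random.Random(seed)
    V = 0.0        # learned reward value
    arousal = 0.0  # running estimate of |prediction error|
    trace = []
    for t in range(n_acq + n_ext):
        # reward delivered probabilistically in acquisition, never in extinction
        r = 1.0 if (t < n_acq and rng.random() < p_reward) else 0.0
        delta = r - V                             # prediction error (dopamine-like)
        arousal += 0.02 * (abs(delta) - arousal)  # arousal tracks |delta|
        V += alpha / (1.0 + gain * arousal) * delta
        trace.append(V)
    return trace

certain = simulate(1.0)   # deterministic reward during acquisition
partial = simulate(0.5)   # 50% probabilistic reward during acquisition
```

In this sketch, partial reinforcement leaves arousal high at the start of extinction, so the value learned under p = 0.5 is shed more slowly relative to its starting level than the value learned under p = 1.0, qualitatively reproducing the slowed extinction of probabilistic rewards that the abstract attributes to the arousal signal.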

