Franz Wurm, Wioleta Walentowska, Benjamin Ernst, Mario Carlo Severo, Gilles Pourtois, Marco Steinhauser
Catholic University of Eichstätt-Ingolstadt, Germany.
Leiden University.
J Cogn Neurosci. 2021 Dec 6;34(1):34-53. doi: 10.1162/jocn_a_01777.
The goal of temporal difference (TD) reinforcement learning is to maximize outcomes and improve future decision-making. It does so by utilizing a prediction error (PE), which quantifies the difference between the expected and the obtained outcome. In gambling tasks, however, decision-making cannot be improved because outcomes are not learnable. On the basis of the idea that TD learning utilizes two independent pieces of information carried by the PE, its valence (sign) and its surprise (unsigned magnitude), we asked which of these aspects is affected when a task is not learnable. We contrasted behavioral data and ERPs in a learning variant and a gambling variant of a simple two-armed bandit task, in which outcome sequences were matched across tasks. Participants were explicitly informed that feedback could be used to improve performance in the learning task but not in the gambling task, and we predicted a corresponding modulation of the two aspects of the PE. We used a model-based analysis of the ERP data to extract the neural footprints of valence and surprise information in the two tasks. Our results revealed that task learnability modulates reinforcement learning via the suppression of surprise processing but leaves the processing of valence unaffected. On the basis of our model and the data, we propose that task learnability can selectively suppress TD learning as well as alter behavioral adaptation based on a flexible cost-benefit arbitration.
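To make the valence/surprise decomposition of the prediction error concrete, the sketch below simulates a delta-rule (TD-style) learner on a two-armed bandit and records both components of the PE on every trial. All specifics here (reward probabilities, learning rate, greedy choice rule, and modeling the gambling task as simply switching off the value update) are illustrative assumptions, not the parameters or model used in the study.

```python
import random

def simulate_bandit(learnable=True, n_trials=200, alpha=0.3, seed=0):
    """Delta-rule learner on a two-armed bandit.

    Hypothetical illustration: reward probabilities, alpha, and the way
    learnability is modeled are assumptions for this sketch only.
    """
    rng = random.Random(seed)
    p_reward = [0.8, 0.2]          # assumed reward probabilities per arm
    q = [0.5, 0.5]                 # initial expected values
    valences, surprises = [], []
    for _ in range(n_trials):
        choice = 0 if q[0] >= q[1] else 1        # greedy choice rule
        outcome = 1.0 if rng.random() < p_reward[choice] else 0.0
        pe = outcome - q[choice]                 # prediction error
        valences.append(1 if pe >= 0 else -1)    # valence = sign of PE
        surprises.append(abs(pe))                # surprise = magnitude of PE
        if learnable:
            q[choice] += alpha * pe              # TD/delta-rule update
        # in the "gambling" variant, outcomes carry no information,
        # so no update is performed and expectations stay flat
    return q, valences, surprises
```

In the learnable variant the expected values drift toward the true reward rates, so surprise shrinks over trials while valence still tracks each outcome's sign; in the non-learnable variant the expectations never move, which is one simple way to picture the suppressed surprise processing reported above.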