Gershman Samuel J, Daw Nathaniel D
Department of Psychology and Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138.
Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, New Jersey 08544.
Annu Rev Psychol. 2017 Jan 3;68:101-128. doi: 10.1146/annurev-psych-122414-033625. Epub 2016 Sep 2.
We review the psychology and neuroscience of reinforcement learning (RL), which has seen significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: the simplicity of these tasks ignores important aspects of reinforcement learning in the real world. (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, theories of RL have largely concerned procedural and semantic memory: how knowledge about action values or world models, extracted gradually from many experiences, can drive choice. This focus on semantic memory leaves out many aspects of memory, such as episodic memory, which concerns the traces of individual events. We suggest that these two challenges are related. The computational challenge can be addressed, in part, by endowing RL systems with episodic memory, allowing them to (a) efficiently approximate value functions over complex state spaces, (b) learn with very little data, and (c) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence supporting it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system.
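To make the proposal concrete, the following is a minimal illustrative sketch of one way episodic traces could support value estimation: individual (state, return) experiences are stored, and a novel state is evaluated by similarity-weighted averaging over the stored returns, in the spirit of kernel-based or episodic-control approaches. The class name, Gaussian kernel, and bandwidth parameter are assumptions for illustration, not the authors' specific model.

```python
import numpy as np

class EpisodicValueEstimator:
    """Sketch of episodic value estimation: store individual (state, return)
    traces and estimate V(s) for a new state by similarity-weighted averaging,
    rather than by incrementally updating a parametric value function."""

    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth  # kernel width controlling generalization across states
        self.states = []            # stored episode states (feature vectors)
        self.returns = []           # discounted returns observed after those states

    def store(self, state, ret):
        """Record one episodic trace: a state and the return that followed it."""
        self.states.append(np.asarray(state, dtype=float))
        self.returns.append(float(ret))

    def value(self, state):
        """Estimate V(state) as a Gaussian-kernel-weighted average of stored returns."""
        if not self.states:
            return 0.0
        s = np.asarray(state, dtype=float)
        dists = np.array([np.linalg.norm(s - x) for x in self.states])
        weights = np.exp(-dists**2 / (2 * self.bandwidth**2))
        if weights.sum() == 0.0:
            return float(np.mean(self.returns))
        return float(np.dot(weights, self.returns) / weights.sum())

# Usage: with two stored episodes, a query state near the first is valued near its return,
# illustrating learning from very little data over a continuous state space.
est = EpisodicValueEstimator(bandwidth=0.5)
est.store([0.0, 0.0], ret=1.0)
est.store([5.0, 5.0], ret=-1.0)
print(est.value([0.1, 0.0]))  # close to 1.0
```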