MRC Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, U.K.
Department of Experimental Psychology, University of Oxford, Oxford OX3 9DU, U.K.
Neural Comput. 2022 Jan 14;34(2):307-337. doi: 10.1162/neco_a_01455.
Reinforcement learning involves updating estimates of the value of states and actions on the basis of experience. Previous work has shown that in humans, reinforcement learning exhibits a confirmatory bias: when updating the value of a chosen option, estimates are revised more radically following positive than negative reward prediction errors, but the converse is observed when updating the value estimate of the unchosen option. Here, we simulate performance on a multi-armed bandit task to examine the consequences of a confirmatory bias for reward harvesting. We report a paradoxical finding: that confirmatory biases allow the agent to maximize reward relative to an unbiased updating rule. This principle holds over a wide range of experimental settings and is most influential when decisions are corrupted by noise. We show that this occurs because, on average, confirmatory biases lead to overestimating the value of more valuable bandits and underestimating the value of less valuable bandits, rendering decisions overall more robust in the face of noise. Our results show how apparently suboptimal learning rules can in fact be reward maximizing if decisions are made with finite computational precision.
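The asymmetric update rule described in the abstract can be illustrated with a minimal simulation. The sketch below assumes a two-armed Bernoulli bandit with full feedback (both outcomes observed each trial), softmax choice whose inverse temperature `beta` plays the role of decision noise, and illustrative parameter values; none of these settings are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_agent(alpha_conf, alpha_disc, p_reward=(0.6, 0.4),
              beta=3.0, n_trials=300, n_runs=1000):
    """Average reward per trial on a two-armed Bernoulli bandit
    with full feedback and a confirmatory update rule."""
    total = 0.0
    for _ in range(n_runs):
        q = np.full(2, 0.5)                # initial value estimates
        for _ in range(n_trials):
            # softmax choice between the two arms; beta sets noise level
            p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
            choice = int(rng.random() < p1)
            rewards = (rng.random(2) < np.asarray(p_reward)).astype(float)
            total += rewards[choice]
            for arm in (0, 1):
                delta = rewards[arm] - q[arm]  # reward prediction error
                if arm == choice:
                    # chosen option: larger step for positive PEs
                    lr = alpha_conf if delta > 0 else alpha_disc
                else:
                    # unchosen option: the converse asymmetry
                    lr = alpha_disc if delta > 0 else alpha_conf
                q[arm] += lr * delta
    return total / (n_runs * n_trials)

# unbiased rule (equal learning rates) vs. a confirmatory rule
print("unbiased    :", run_agent(0.10, 0.10))
print("confirmatory:", run_agent(0.15, 0.05))
```

With these (hypothetical) parameters, the confirmatory agent's value estimates for the better arm drift upward and those for the worse arm drift downward, widening the gap that the softmax must discriminate and thereby yielding a higher average reward than the unbiased agent, consistent with the effect the abstract describes.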