Bavard Sophie, Rustichini Aldo, Palminteri Stefano
Laboratoire de Neurosciences Cognitives et Computationnelles, Institut National de la Santé et de la Recherche Médicale, 29 rue d'Ulm, 75005 Paris, France.
Ecole normale supérieure, 29 rue d'Ulm, 75005 Paris, France.
Sci Adv. 2021 Apr 2;7(14). doi: 10.1126/sciadv.abe0340. Print 2021 Apr.
Evidence suggests that economic values are rescaled as a function of the range of the available options. Although locally adaptive, range adaptation has been shown to lead to suboptimal choices, particularly notable in reinforcement learning (RL) situations when options are extrapolated from their original context to a new one. Range adaptation can be seen as the result of an adaptive coding process aiming at increasing the signal-to-noise ratio. However, this hypothesis leads to a counterintuitive prediction: Decreasing task difficulty should increase range adaptation and, consequently, extrapolation errors. Here, we tested the paradoxical relation between range adaptation and performance in a large sample of participants performing variants of an RL task, where we manipulated task difficulty. Results confirmed that range adaptation induces systematic extrapolation errors and is stronger when decreasing task difficulty. Last, we propose a range-adapting model and show that it is able to parsimoniously capture all the behavioral results.
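The range-adaptation mechanism described above can be illustrated with a minimal sketch: rewards are rescaled by the minimum and maximum observed in the current context before a standard delta-rule value update. This is an illustrative simplification, not the authors' exact model; the function names and the learning rate are assumptions.

```python
# Illustrative sketch of range adaptation in reinforcement learning.
# Assumption: rewards are rescaled by the context's observed reward range
# before the value update (a simplification of the paper's model).

def range_normalize(reward, r_min, r_max):
    """Rescale a reward to [0, 1] within the current context's range."""
    if r_max == r_min:
        return 0.5  # degenerate range: no discriminative signal
    return (reward - r_min) / (r_max - r_min)

def q_update(q, reward, r_min, r_max, alpha=0.3):
    """Delta-rule update applied to the range-normalized reward."""
    r_norm = range_normalize(reward, r_min, r_max)
    return q + alpha * (r_norm - q)
```

Under this sketch, the best option of a low-stakes context and the best option of a high-stakes context both converge toward a normalized value near 1, so when options are transferred to a new context their learned values no longer rank them by absolute payoff, producing the extrapolation errors described above.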