A Normative Account of Confirmation Bias During Reinforcement Learning.

Affiliations

MRC Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, U.K.

Department of Experimental Psychology, University of Oxford, Oxford OX3 9DU, U.K.

Publication information

Neural Comput. 2022 Jan 14;34(2):307-337. doi: 10.1162/neco_a_01455.

Abstract

Reinforcement learning involves updating estimates of the value of states and actions on the basis of experience. Previous work has shown that in humans, reinforcement learning exhibits a confirmatory bias: when the value of a chosen option is being updated, estimates are revised more radically following positive than negative reward prediction errors, but the converse is observed when updating the unchosen option value estimate. Here, we simulate performance on a multi-arm bandit task to examine the consequences of a confirmatory bias for reward harvesting. We report a paradoxical finding: that confirmatory biases allow the agent to maximize reward relative to an unbiased updating rule. This principle holds over a wide range of experimental settings and is most influential when decisions are corrupted by noise. We show that this occurs because on average, confirmatory biases lead to overestimating the value of more valuable bandits and underestimating the value of less valuable bandits, rendering decisions overall more robust in the face of noise. Our results show how apparently suboptimal learning rules can in fact be reward maximizing if decisions are made with finite computational precision.
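
The asymmetric update rule described above can be sketched in a few lines. The following is a minimal illustration, not the authors' simulation code: a two-armed bandit with softmax decision noise and a full-feedback assumption (the outcome of the unchosen arm is also observed), where the learning rate applied to each prediction error depends on whether that error confirms the choice. The parameter names (alpha_conf, alpha_disconf, beta) and all settings are illustrative; setting the two learning rates equal recovers the unbiased learner.

```python
import numpy as np

def simulate_bandit(n_trials=1000, p_reward=(0.7, 0.3),
                    alpha_conf=0.3, alpha_disconf=0.1,
                    beta=5.0, seed=0):
    """Confirmatory-bias Q-learning on a two-armed bandit (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)              # value estimates for the two arms
    total_reward = 0.0
    for _ in range(n_trials):
        # Softmax choice: beta controls how strongly decisions follow q,
        # i.e. how much noise corrupts the decision.
        p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p_choose_0 else 1
        other = 1 - choice
        # Full-feedback assumption: outcomes of both arms are observed.
        outcomes = (rng.random(2) < np.asarray(p_reward)).astype(float)
        total_reward += outcomes[choice]
        delta_c = outcomes[choice] - q[choice]   # chosen-option prediction error
        delta_u = outcomes[other] - q[other]     # unchosen-option prediction error
        # Confirmatory asymmetry: the chosen option is updated more strongly
        # after positive errors, the unchosen option after negative errors.
        q[choice] += (alpha_conf if delta_c > 0 else alpha_disconf) * delta_c
        q[other]  += (alpha_disconf if delta_u > 0 else alpha_conf) * delta_u
    return total_reward

if __name__ == "__main__":
    # Compare reward harvested by a biased and an unbiased learner,
    # averaged over a few random seeds (illustrative settings only).
    biased = np.mean([simulate_bandit(alpha_conf=0.3, alpha_disconf=0.1, seed=s)
                      for s in range(20)])
    unbiased = np.mean([simulate_bandit(alpha_conf=0.2, alpha_disconf=0.2, seed=s)
                        for s in range(20)])
    print(f"mean total reward  biased: {biased:.1f}   unbiased: {unbiased:.1f}")
```

Running the script prints the average reward harvested under the two update rules, which is the comparison the simulations in the paper are built around.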

Similar articles

Effort Reinforces Learning.
J Neurosci. 2022 Oct 5;42(40):7648-7658. doi: 10.1523/JNEUROSCI.2223-21.2022. Epub 2022 Sep 12.

Choice history effects in mice and humans improve reward harvesting efficiency.
PLoS Comput Biol. 2021 Oct 4;17(10):e1009452. doi: 10.1371/journal.pcbi.1009452. eCollection 2021 Oct.

Cited by

Understanding learning through uncertainty and bias.
Commun Psychol. 2025 Feb 13;3(1):24. doi: 10.1038/s44271-025-00203-y.

A Competition of Critics in Human Decision-Making.
Comput Psychiatr. 2021 Aug 12;5(1):81-101. doi: 10.5334/cpsy.64. eCollection 2021.

The roots of polarization in the individual reward system.
Proc Biol Sci. 2024 Feb 28;291(2017):20232011. doi: 10.1098/rspb.2023.2011.

References

Decreased transfer of value to action in Tourette syndrome.
Cortex. 2020 May;126:39-48. doi: 10.1016/j.cortex.2019.12.027. Epub 2020 Jan 24.

A distributional code for value in dopamine-based reinforcement learning.
Nature. 2020 Jan;577(7792):671-675. doi: 10.1038/s41586-019-1924-6. Epub 2020 Jan 15.

Flexible combination of reward information across primates.
Nat Hum Behav. 2019 Nov;3(11):1215-1224. doi: 10.1038/s41562-019-0714-3. Epub 2019 Sep 9.

Learning the payoffs and costs of actions.
PLoS Comput Biol. 2019 Feb 28;15(2):e1006285. doi: 10.1371/journal.pcbi.1006285. eCollection 2019 Feb.

Habits without values.
Psychol Rev. 2019 Mar;126(2):292-311. doi: 10.1037/rev0000120. Epub 2019 Jan 24.
