

A Normative Account of Confirmation Bias During Reinforcement Learning.

Affiliations

MRC Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, U.K.

Department of Experimental Psychology, University of Oxford, Oxford OX3 9DU, U.K.

Publication

Neural Comput. 2022 Jan 14;34(2):307-337. doi: 10.1162/neco_a_01455.

DOI: 10.1162/neco_a_01455
PMID: 34758486
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7612695/
Abstract

Reinforcement learning involves updating estimates of the value of states and actions on the basis of experience. Previous work has shown that in humans, reinforcement learning exhibits a confirmatory bias: when the value of a chosen option is being updated, estimates are revised more radically following positive than negative reward prediction errors, but the converse is observed when updating the unchosen option value estimate. Here, we simulate performance on a multi-arm bandit task to examine the consequences of a confirmatory bias for reward harvesting. We report a paradoxical finding: that confirmatory biases allow the agent to maximize reward relative to an unbiased updating rule. This principle holds over a wide range of experimental settings and is most influential when decisions are corrupted by noise. We show that this occurs because on average, confirmatory biases lead to overestimating the value of more valuable bandits and underestimating the value of less valuable bandits, rendering decisions overall more robust in the face of noise. Our results show how apparently suboptimal learning rules can in fact be reward maximizing if decisions are made with finite computational precision.
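The asymmetric update rule the abstract describes can be sketched as a small simulation. This is an illustrative reconstruction, not the paper's code: the parameter values, the function name, the noisy-greedy choice rule, and the assumption that counterfactual feedback is available for the unchosen arm are all assumptions made here for concreteness.

```python
import random

def run_bandit(p_reward=(0.7, 0.3), alpha_confirm=0.3, alpha_disconfirm=0.1,
               noise=0.2, n_trials=1000, seed=0):
    """Confirmation-biased value learning on a two-armed bandit (sketch).

    The chosen arm is updated with a larger learning rate after positive
    reward prediction errors; for the unchosen arm the asymmetry reverses.
    """
    rng = random.Random(seed)
    q = [0.5, 0.5]          # value estimates for both arms
    total_reward = 0.0
    for _ in range(n_trials):
        # Decision noise: with probability `noise`, choose at random;
        # otherwise pick the currently higher-valued arm.
        if rng.random() < noise:
            choice = rng.randrange(2)
        else:
            choice = 0 if q[0] >= q[1] else 1
        unchosen = 1 - choice

        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        total_reward += reward

        # Chosen arm: confirmatory agents weight positive errors more.
        delta = reward - q[choice]
        q[choice] += (alpha_confirm if delta > 0 else alpha_disconfirm) * delta

        # Unchosen arm (counterfactual outcome assumed observable):
        # negative errors are weighted more, the reverse asymmetry.
        cf_reward = 1.0 if rng.random() < p_reward[unchosen] else 0.0
        delta_u = cf_reward - q[unchosen]
        q[unchosen] += (alpha_disconfirm if delta_u > 0 else alpha_confirm) * delta_u
    return total_reward, q
```

With these (hypothetical) settings the bias inflates the estimate of the better arm and deflates the worse one, which is the mechanism the abstract identifies: the widened value gap makes the noisy greedy choice less likely to flip to the inferior arm.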


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/839de0fe921a/EMS143783-f001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/322f50806049/EMS143783-f002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/becbd9e12cc0/EMS143783-f003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/6c5004dbfe86/EMS143783-f004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/71d899ffcb84/EMS143783-f005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/327cb94a28e8/EMS143783-f006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/3094865115b7/EMS143783-f007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/69559fae4604/EMS143783-f008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e5d/7612695/c0a09d053b85/EMS143783-f009.jpg

Similar Articles

1
A Normative Account of Confirmation Bias During Reinforcement Learning.
Neural Comput. 2022 Jan 14;34(2):307-337. doi: 10.1162/neco_a_01455.
2
The computational roots of positivity and confirmation biases in reinforcement learning.
Trends Cogn Sci. 2022 Jul;26(7):607-621. doi: 10.1016/j.tics.2022.04.005. Epub 2022 May 31.
3
Moderate confirmation bias enhances decision-making in groups of reinforcement-learning agents.
PLoS Comput Biol. 2024 Sep 4;20(9):e1012404. doi: 10.1371/journal.pcbi.1012404. eCollection 2024 Sep.
4
Confirmatory reinforcement learning changes with age during adolescence.
Dev Sci. 2023 May;26(3):e13330. doi: 10.1111/desc.13330. Epub 2022 Oct 27.
5
Active reinforcement learning versus action bias and hysteresis: control with a mixture of experts and nonexperts.
PLoS Comput Biol. 2024 Mar 29;20(3):e1011950. doi: 10.1371/journal.pcbi.1011950. eCollection 2024 Mar.
6
Effort Reinforces Learning.
J Neurosci. 2022 Oct 5;42(40):7648-7658. doi: 10.1523/JNEUROSCI.2223-21.2022. Epub 2022 Sep 12.
7
Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing.
PLoS Comput Biol. 2017 Aug 11;13(8):e1005684. doi: 10.1371/journal.pcbi.1005684. eCollection 2017 Aug.
8
Contextual influence of reinforcement learning performance of depression: evidence for a negativity bias?
Psychol Med. 2023 Jul;53(10):4696-4706. doi: 10.1017/S0033291722001593. Epub 2022 Jun 21.
9
How pupil responses track value-based decision-making during and after reinforcement learning.
PLoS Comput Biol. 2018 Nov 30;14(11):e1006632. doi: 10.1371/journal.pcbi.1006632. eCollection 2018 Nov.
10
Choice history effects in mice and humans improve reward harvesting efficiency.
PLoS Comput Biol. 2021 Oct 4;17(10):e1009452. doi: 10.1371/journal.pcbi.1009452. eCollection 2021 Oct.

Cited By

1
Uncertainty and reward histories have distinct effects on decisions after wins and losses.
bioRxiv. 2025 Aug 19:2025.08.14.670176. doi: 10.1101/2025.08.14.670176.
2
Understanding learning through uncertainty and bias.
Commun Psychol. 2025 Feb 13;3(1):24. doi: 10.1038/s44271-025-00203-y.
3
Moderate confirmation bias enhances decision-making in groups of reinforcement-learning agents.

References

1
Optimal utility and probability functions for agents with finite computational precision.
Proc Natl Acad Sci U S A. 2021 Jan 12;118(2). doi: 10.1073/pnas.2002232118.
2
Information about action outcomes differentially affects learning from self-determined versus imposed choices.
Nat Hum Behav. 2020 Oct;4(10):1067-1079. doi: 10.1038/s41562-020-0919-5. Epub 2020 Aug 3.
3
Decreased transfer of value to action in Tourette syndrome.
Cortex. 2020 May;126:39-48. doi: 10.1016/j.cortex.2019.12.027. Epub 2020 Jan 24.
4
Risk preference as an outcome of evolutionarily adaptive learning mechanisms: An evolutionary simulation under diverse risky environments.
PLoS One. 2024 Aug 1;19(8):e0307991. doi: 10.1371/journal.pone.0307991. eCollection 2024.
5
A Competition of Critics in Human Decision-Making.
Comput Psychiatr. 2021 Aug 12;5(1):81-101. doi: 10.5334/cpsy.64. eCollection 2021.
6
Active reinforcement learning versus action bias and hysteresis: control with a mixture of experts and nonexperts.
PLoS Comput Biol. 2024 Mar 29;20(3):e1011950. doi: 10.1371/journal.pcbi.1011950. eCollection 2024 Mar.
7
The roots of polarization in the individual reward system.
Proc Biol Sci. 2024 Feb 28;291(2017):20232011. doi: 10.1098/rspb.2023.2011.
8
Confirmatory reinforcement learning changes with age during adolescence.
Dev Sci. 2023 May;26(3):e13330. doi: 10.1111/desc.13330. Epub 2022 Oct 27.
9
Efficient stabilization of imprecise statistical inference through conditional belief updating.
Nat Hum Behav. 2022 Dec;6(12):1691-1704. doi: 10.1038/s41562-022-01445-0. Epub 2022 Sep 22.
10
As within, so without, as above, so below: Common mechanisms can support between- and within-trial category learning dynamics.
Psychol Rev. 2022 Oct;129(5):1104-1143. doi: 10.1037/rev0000381. Epub 2022 Jul 18.
11
A distributional code for value in dopamine-based reinforcement learning.
Nature. 2020 Jan;577(7792):671-675. doi: 10.1038/s41586-019-1924-6. Epub 2020 Jan 15.
12
Computational noise in reward-guided learning drives behavioral variability in volatile environments.
Nat Neurosci. 2019 Dec;22(12):2066-2077. doi: 10.1038/s41593-019-0518-9. Epub 2019 Oct 28.
13
Flexible combination of reward information across primates.
Nat Hum Behav. 2019 Nov;3(11):1215-1224. doi: 10.1038/s41562-019-0714-3. Epub 2019 Sep 9.
14
Learning the payoffs and costs of actions.
PLoS Comput Biol. 2019 Feb 28;15(2):e1006285. doi: 10.1371/journal.pcbi.1006285. eCollection 2019 Feb.
15
Habits without values.
Psychol Rev. 2019 Mar;126(2):292-311. doi: 10.1037/rev0000120. Epub 2019 Jan 24.
16
Selective Effects of the Loss of NMDA or mGluR5 Receptors in the Reward System on Adaptive Decision-Making.
eNeuro. 2018 Oct 5;5(4). doi: 10.1523/ENEURO.0331-18.2018. eCollection 2018 Jul-Aug.
17
Confirmation Bias through Selective Overweighting of Choice-Consistent Evidence.
Curr Biol. 2018 Oct 8;28(19):3128-3135.e8. doi: 10.1016/j.cub.2018.07.052. Epub 2018 Sep 13.