


Human value learning and representation reflect rational adaptation to task demands.

Affiliations

Department of Experimental Psychology, University of Oxford, Oxford, UK.

St John's College, University of Oxford, Oxford, UK.

Publication Information

Nat Hum Behav. 2022 Sep;6(9):1268-1279. doi: 10.1038/s41562-022-01360-4. Epub 2022 May 30.

DOI: 10.1038/s41562-022-01360-4
PMID: 35637297
Abstract

Humans and other animals routinely make choices between goods of different values. Choices are often made within identifiable contexts, such that an efficient learner may represent values relative to their local context. However, if goods occur across multiple contexts, a relative value code can lead to irrational choices. In this case, an absolute context-independent value is preferable to a relative code. Here we test the hypothesis that value representation is not fixed but rationally adapted to context expectations. In two experiments, we manipulated participants' expectations about whether item values learned within local contexts would need to be subsequently compared across contexts. Despite identical learning experiences, the group whose expectations included choices across local contexts went on to learn more absolute-like representation than the group whose expectations covered only fixed local contexts. Human value representation is thus neither relative nor absolute but efficiently and rationally tuned to task demands.
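To make the abstract's contrast between relative and absolute value codes concrete, the short Python sketch below works through a toy two-context example; the item names, reward values, and range-normalization rule are illustrative assumptions, not the paper's task or model. It shows how a value code that is relative to its local learning context can reverse a cross-context preference that an absolute code gets right.

```python
# A minimal sketch (not the authors' model) contrasting an absolute value code
# with a context-relative (range-normalized) code. Item values, context
# assignments, and the normalization rule are illustrative assumptions.

# Two learning contexts, each containing two items with known reward magnitudes.
contexts = {
    "rich": {"A": 80.0, "B": 60.0},   # high-value context
    "poor": {"C": 30.0, "D": 10.0},   # low-value context
}

# Absolute code: store each item's value as experienced.
absolute = {item: value for ctx in contexts.values() for item, value in ctx.items()}

# Relative code: normalize each item's value to the range of its local context,
# so every context spans [0, 1] regardless of its absolute payoffs.
relative = {}
for ctx in contexts.values():
    lo, hi = min(ctx.values()), max(ctx.values())
    for item, value in ctx.items():
        relative[item] = (value - lo) / (hi - lo)

def choose(values, left, right):
    """Pick whichever option has the higher represented value."""
    return left if values[left] >= values[right] else right

# Within-context choices: both codes agree (A over B).
assert choose(absolute, "A", "B") == choose(relative, "A", "B") == "A"

# Cross-context choice between B (60 points) and C (30 points):
# the absolute code correctly prefers B, but the relative code prefers C,
# because C is the best item of its poor context (relative value 1.0)
# while B is the worst item of its rich context (relative value 0.0).
print("absolute code picks:", choose(absolute, "B", "C"))  # -> B (rational)
print("relative code picks:", choose(relative, "B", "C"))  # -> C (irrational)
```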


Similar Articles

1. Human value learning and representation reflect rational adaptation to task demands. Nat Hum Behav. 2022 Sep;6(9):1268-1279. doi: 10.1038/s41562-022-01360-4. Epub 2022 May 30.
2. The Effect of Counterfactual Information on Outcome Value Coding in Medial Prefrontal and Cingulate Cortex: From an Absolute to a Relative Neural Code. J Neurosci. 2020 Apr 15;40(16):3268-3277. doi: 10.1523/JNEUROSCI.1712-19.2020. Epub 2020 Mar 10.
3. Neural oscillations reflect latent learning states underlying dual-context sensorimotor adaptation. Neuroimage. 2017 Dec;163:93-105. doi: 10.1016/j.neuroimage.2017.09.026. Epub 2017 Sep 15.
4. BOLD subjective value signals exhibit robust range adaptation. J Neurosci. 2014 Dec 3;34(49):16533-43. doi: 10.1523/JNEUROSCI.3927-14.2014.
5. Learning to represent a multi-context environment: more than detecting changes. Front Psychol. 2012 Jul 20;3:228. doi: 10.3389/fpsyg.2012.00228. eCollection 2012.
6. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses. J Neurosci. 2016 Sep 28;36(39):10016-25. doi: 10.1523/JNEUROSCI.0487-16.2016.
7. Training diversity promotes absolute-value-guided choice. PLoS Comput Biol. 2022 Nov 2;18(11):e1010664. doi: 10.1371/journal.pcbi.1010664. eCollection 2022 Nov.
8. Relative errors can cue absolute visuomotor mappings. Exp Brain Res. 2015 Dec;233(12):3367-77. doi: 10.1007/s00221-015-4403-9. Epub 2015 Aug 18.
9. "Context-dependent learning in social interaction: Trait impressions support flexible social choices": Correction to Hackel et al. (2022). J Pers Soc Psychol. 2022 Oct;123(4):675. doi: 10.1037/pspa0000325.
10. Context-sensitive valuation and learning. Curr Opin Behav Sci. 2021 Oct;41:122-127. doi: 10.1016/j.cobeha.2021.05.001. Epub 2021 Jun 9.

Cited By

1. The timescale and direction of influence of a third inferior alternative in human value-learning. Commun Psychol. 2025 Apr 5;3(1):56. doi: 10.1038/s44271-025-00229-2.
2. Comparing experience- and description-based economic preferences across 11 countries. Nat Hum Behav. 2024 Aug;8(8):1554-1567. doi: 10.1038/s41562-024-01894-9. Epub 2024 Jun 14.
3. Intrinsic rewards explain context-sensitive valuation in reinforcement learning. PLoS Biol. 2023 Jul 17;21(7):e3002201. doi: 10.1371/journal.pbio.3002201. eCollection 2023 Jul.
4. The functional form of value normalization in human reinforcement learning. Elife. 2023 Jul 10;12:e83891. doi: 10.7554/eLife.83891.
5. The Future of Decisions From Experience: Connecting Real-World Decision Problems to Cognitive Processes. Perspect Psychol Sci. 2024 Jan;19(1):82-102. doi: 10.1177/17456916231179138. Epub 2023 Jun 30.
6. Outcome context-dependence is not WEIRD: Comparing reinforcement- and description-based economic preferences worldwide. Res Sq. 2023 Mar 2:rs.3.rs-2621222. doi: 10.21203/rs.3.rs-2621222/v1.
7. Training diversity promotes absolute-value-guided choice. PLoS Comput Biol. 2022 Nov 2;18(11):e1010664. doi: 10.1371/journal.pcbi.1010664. eCollection 2022 Nov.

References

1. Asymmetric reinforcement learning facilitates human inference of transitive relations. Nat Hum Behav. 2022 Apr;6(4):555-564. doi: 10.1038/s41562-021-01263-w. Epub 2022 Jan 31.
2. The case against economic values in the orbitofrontal cortex (or anywhere else in the brain). Behav Neurosci. 2021 Apr;135(2):192-201. doi: 10.1037/bne0000448.
3. Two sides of the same coin: Beneficial and detrimental consequences of range adaptation in human reinforcement learning. Sci Adv. 2021 Apr 2;7(14). doi: 10.1126/sciadv.abe0340. Print 2021 Apr.
4. Neural state space alignment for magnitude generalization in humans and recurrent networks. Neuron. 2021 Apr 7;109(7):1214-1226.e8. doi: 10.1016/j.neuron.2021.02.004. Epub 2021 Feb 23.
5. Optimal utility and probability functions for agents with finite computational precision. Proc Natl Acad Sci U S A. 2021 Jan 12;118(2). doi: 10.1073/pnas.2002232118.
6. Beyond dichotomies in reinforcement learning. Nat Rev Neurosci. 2020 Oct;21(10):576-586. doi: 10.1038/s41583-020-0355-6. Epub 2020 Sep 1.
7. Value-based attention but not divisive normalization influences decisions with multiple alternatives. Nat Hum Behav. 2020 Jun;4(6):634-645. doi: 10.1038/s41562-020-0822-0. Epub 2020 Feb 3.
8. How to Change the Weight of Rare Events in Decisions From Experience. Psychol Sci. 2019 Dec;30(12):1767-1779. doi: 10.1177/0956797619884324. Epub 2019 Nov 14.
9. Reference effects on decision-making elicited by previous rewards. Cognition. 2019 Nov;192:104034. doi: 10.1016/j.cognition.2019.104034. Epub 2019 Aug 3.
10. Neural structure mapping in human probabilistic reward learning. Elife. 2019 Mar 7;8:e42816. doi: 10.7554/eLife.42816.