

On Learning To Become a Successful Loser: A Comparison of Alternative Abstractions of Learning Processes in the Loss Domain.

Authors

Bereby-Meyer Y, Erev I

Affiliation

Technion-Israel Institute of Technology

Publication information

J Math Psychol. 1998 Jun;42(2/3):266-86. doi: 10.1006/jmps.1998.1214.

DOI: 10.1006/jmps.1998.1214
PMID: 9710551
Abstract

One of the main difficulties in the development of descriptive models of learning in repeated choice tasks involves the abstraction of the effect of losses. The present paper explains this difficulty, summarizes its common solutions, and presents an experiment designed to compare the descriptive power of the specific quantifications of these solutions proposed in recent research. The experiment utilized a probability learning task. In each of the experiment's 500 trials, participants were asked to predict the appearance of one of two colors. The probabilities of appearance of the colors were different but fixed during the entire experiment. The experimental manipulation involved the addition of a constant to the payoffs. The results demonstrate that learning in the loss domain can be faster than learning in the gain domain, and that adding a constant to the payoff matrix can affect the learning process. These results are consistent with Erev & Roth's (1996) adjustable reference point abstraction of the effect of losses, and violate all other models. Copyright 1998 Academic Press.
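The adjustable-reference-point idea the abstract credits to Erev & Roth (1996) can be illustrated with a toy simulation. This is a minimal sketch, not the authors' actual model: the update rule, the adaptation speed `w`, the payoff values, and the `payoff_shift` manipulation are all assumptions chosen only to show how shifting every payoff by a constant can change the learning path when reinforcement is measured relative to an adapting reference point.

```python
import random

def simulate(p_high=0.7, payoff_shift=0.0, trials=500, seed=0):
    """Toy two-choice learner with an adjustable reference point.

    On each trial the agent predicts one of two colors; a correct
    prediction pays 1 + payoff_shift, an incorrect one pays payoff_shift.
    Propensities are reinforced by (payoff - reference), where the
    reference point tracks the running average payoff, so the same rule
    operates in both the gain and the loss domain.
    """
    rng = random.Random(seed)
    prop = [1.0, 1.0]   # initial propensities for the two colors
    ref = 0.0           # adjustable reference point
    w = 0.1             # reference-point adaptation speed (assumed value)
    correct = 0
    for _ in range(trials):
        # choose a color proportionally to current propensities
        total = prop[0] + prop[1]
        choice = 0 if rng.random() < prop[0] / total else 1
        outcome = 0 if rng.random() < p_high else 1
        payoff = (1.0 if choice == outcome else 0.0) + payoff_shift
        correct += choice == outcome
        # reinforce relative to the reference point; floor keeps
        # propensities positive so choice probabilities stay defined
        prop[choice] = max(prop[choice] + (payoff - ref), 0.01)
        # reference point drifts toward the experienced payoff
        ref += w * (payoff - ref)
    return correct / trials

# Adding a constant to all payoffs leaves the choice problem unchanged,
# yet alters the reinforcement signal (payoff - ref) early in learning.
gain = simulate(payoff_shift=0.0)   # gain-domain condition
loss = simulate(payoff_shift=-1.0)  # loss-domain condition
```

Because the reference point adapts over trials, the early difference between payoff and reference differs across the two conditions, which is the mechanism by which such a model can predict the payoff-shift effect the experiment reports.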


Similar articles

1. On Learning To Become a Successful Loser: A Comparison of Alternative Abstractions of Learning Processes in the Loss Domain.
   J Math Psychol. 1998 Jun;42(2/3):266-86. doi: 10.1006/jmps.1998.1214.
2. A further test of sequential-sampling models that account for payoff effects on response bias in perceptual decision tasks.
   Percept Psychophys. 2008 Feb;70(2):229-56. doi: 10.3758/pp.70.2.229.
3. Accidents and Decision Making under Uncertainty: A Comparison of Four Models.
   Organ Behav Hum Decis Process. 1998 May;74(2):118-44. doi: 10.1006/obhd.1998.2772.
4. The Effects of Framing, Reflection, Probability, and Payoff on Risk Preference in Choice Tasks.
   Organ Behav Hum Decis Process. 1999 Jun;78(3):204-231. doi: 10.1006/obhd.1999.2830.
5. Allocation of effort as a function of payoffs for individual tasks in a multitasking environment.
   Behav Res Methods. 2009 Aug;41(3):705-16. doi: 10.3758/BRM.41.3.705.
6. Visually defining and querying consistent multi-granular clinical temporal abstractions.
   Artif Intell Med. 2012 Feb;54(2):75-101. doi: 10.1016/j.artmed.2011.10.004. Epub 2011 Dec 15.
7. Toward Generalization of Automated Temporal Abstraction to Partially Observable Reinforcement Learning.
   IEEE Trans Cybern. 2015 Aug;45(8):1414-25. doi: 10.1109/TCYB.2014.2352038. Epub 2014 Sep 9.
8. Decision making under uncertainty: a comparison of simple scalability, fixed-sample, and sequential-sampling models.
   J Exp Psychol Learn Mem Cogn. 1985 Jul;11(3):538-64. doi: 10.1037//0278-7393.11.3.538.
9. On adaptation, maximization, and reinforcement learning among cognitive strategies.
   Psychol Rev. 2005 Oct;112(4):912-931. doi: 10.1037/0033-295X.112.4.912.
10. Predictive Movements and Human Reinforcement Learning of Sequential Action.
    Cogn Sci. 2018 Jun;42 Suppl 3(Suppl Suppl 3):783-808. doi: 10.1111/cogs.12599. Epub 2018 Mar 2.

Cited by

1. Prosocial Gains and Losses: Modulations of Human Social Decision-Making by Loss-Gain Context.
   Front Psychol. 2021 Oct 28;12:755910. doi: 10.3389/fpsyg.2021.755910. eCollection 2021.
2. Contrasting temporal difference and opportunity cost reinforcement learning in an empirical money-emergence paradigm.
   Proc Natl Acad Sci U S A. 2018 Dec 4;115(49):E11446-E11454. doi: 10.1073/pnas.1813197115. Epub 2018 Nov 15.
3. Acceptable losses: the debatable origins of loss aversion.
   Psychol Res. 2019 Oct;83(7):1327-1339. doi: 10.1007/s00426-018-1013-8. Epub 2018 Apr 16.
4. Exploration and recency as the main proximate causes of probability matching: a reinforcement learning analysis.
   Sci Rep. 2017 Nov 10;7(1):15326. doi: 10.1038/s41598-017-15587-z.
5. To Take Risk is to Face Loss: A Tonic Pupillometry Study.
   Front Psychol. 2011 Nov 22;2:344. doi: 10.3389/fpsyg.2011.00344. eCollection 2011.
6. Recombination and the evolution of coordinated phenotypic expression in a frequency-dependent game.
   Theor Popul Biol. 2011 Dec;80(4):244-55. doi: 10.1016/j.tpb.2011.09.001. Epub 2011 Sep 14.
7. Evolution of social learning when high expected payoffs are associated with high risk of failure.
   J R Soc Interface. 2011 Nov 7;8(64):1604-15. doi: 10.1098/rsif.2011.0138. Epub 2011 Apr 20.
8. Optimizing vs. matching: response strategy in a probabilistic learning task is associated with negative symptoms of schizophrenia.
   Schizophr Res. 2011 Apr;127(1-3):215-22. doi: 10.1016/j.schres.2010.12.003. Epub 2011 Jan 15.
9. Co-evolution of learning complexity and social foraging strategies.
   J Theor Biol. 2010 Dec 21;267(4):573-81. doi: 10.1016/j.jtbi.2010.09.026. Epub 2010 Sep 19.
10. Doomed to repeat the successes of the past: history is best forgotten for repeated choices with nonstationary payoffs.
    Mem Cognit. 2009 Oct;37(7):985-1000. doi: 10.3758/MC.37.7.985.