
A Statistical Foundation for Derived Attention.

Author Information

Samuel Paskewitz, Matt Jones

Affiliations

Department of Psychiatry, Children's Hospital, Anschutz Medical Campus, University of Colorado Denver.

Department of Psychology and Neuroscience, University of Colorado Boulder.

Publication Information

J Math Psychol. 2023 Feb;112. doi: 10.1016/j.jmp.2022.102728. Epub 2022 Dec 8.

Abstract

According to the theory of derived attention, organisms attend to cues with strong associations. Prior work has shown that, combined with a Rescorla-Wagner style learning mechanism, derived attention explains phenomena such as learned predictiveness, inattention to blocked cues, and value-based salience. We introduce a Bayesian derived attention model that explains a wider array of results than previous models and gives further insight into the principle of derived attention. Our approach combines Bayesian linear regression with the assumption that the associations of any cue with various outcomes share the same prior variance, which can be thought of as the inherent importance of that cue. The new model simultaneously estimates cue-outcome associations and prior variance through approximate Bayesian learning. A significant cue will develop large associations, leading the model to estimate a high prior variance and hence to develop larger associations from that cue to novel outcomes. This provides a normative, statistical explanation for derived attention. Through simulation, we show that this Bayesian derived attention model explains not only the same phenomena as previous versions but also retrospective revaluation. It also makes a novel prediction: inattention after backward blocking. We hope that further development of the Bayesian derived attention model will shed light on the complex relationship between uncertainty and predictiveness effects on attention.
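
To make the abstract's core idea concrete, here is a minimal sketch, assuming a batch empirical-Bayes (EM) formulation rather than the paper's approximate online learning rule. Everything below is an illustrative assumption, not the authors' implementation: each cue has one prior variance shared by its associations to all outcomes, the E-step computes the exact Gaussian posterior over weights given those variances, and the M-step re-estimates each cue's variance from the posterior second moments of its associations. A cue whose associations grow large earns a larger prior variance, so its associations to novel outcomes start from a broader prior.

```python
# Sketch of derived attention as shared-prior-variance Bayesian regression.
# NOT the authors' code: a batch EM / empirical-Bayes stand-in for their
# approximate online learner. Names (fit, tau2, noise_var) are invented.
import numpy as np

def fit(X, Y, noise_var=0.25, n_em=30):
    """Multi-output Bayesian linear regression where all outcomes share
    one prior variance per cue, re-estimated by empirical Bayes."""
    n_cues = X.shape[1]
    tau2 = np.ones(n_cues)  # per-cue prior variance ("inherent importance")
    for _ in range(n_em):
        # E-step: exact Gaussian posterior over weights given current tau2.
        Sigma = np.linalg.inv(X.T @ X / noise_var + np.diag(1.0 / tau2))
        Mu = Sigma @ X.T @ Y / noise_var  # posterior means, (n_cues, n_outcomes)
        # M-step: a cue's prior variance is the average posterior second
        # moment of its associations across outcomes, so large associations
        # imply high estimated importance -- i.e., derived attention.
        tau2 = (Mu ** 2).mean(axis=1) + np.diag(Sigma)
    return Mu, tau2

rng = np.random.default_rng(0)
n = 200
# Toy learned-predictiveness design: cue 0 predicts outcome 0, cue 1 is noise.
X = rng.integers(0, 2, size=(n, 2)).astype(float)
Y = np.column_stack([X[:, 0] + rng.normal(0, 0.5, n), rng.normal(0, 0.5, n)])
Mu, tau2 = fit(X, Y)
print("posterior mean weights:\n", np.round(Mu, 2))
print("per-cue prior variances (attention):", np.round(tau2, 2))
```

Under this sketch the predictive cue ends with the larger estimated prior variance, so associations from that cue to any newly introduced outcome would be learned faster under the same evidence, matching the transfer effects the abstract describes.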

Similar Articles

A Statistical Foundation for Derived Attention. J Math Psychol. 2023 Feb;112. doi: 10.1016/j.jmp.2022.102728. Epub 2022 Dec 8.

ecco: An error correcting comparator theory. Behav Processes. 2018 Sep;154:36-44. doi: 10.1016/j.beproc.2018.03.009. Epub 2018 Mar 8.
