Paskewitz Samuel, Jones Matt
Department of Psychiatry, Children's Hospital, Anschutz Medical Campus, University of Colorado Denver.
Department of Psychology and Neuroscience, University of Colorado Boulder.
J Math Psychol. 2023 Feb;112. doi: 10.1016/j.jmp.2022.102728. Epub 2022 Dec 8.
According to the theory of derived attention, organisms attend to cues with strong associations. Prior work has shown that, combined with a Rescorla-Wagner style learning mechanism, derived attention explains phenomena such as learned predictiveness, inattention to blocked cues, and value-based salience. We introduce a Bayesian derived attention model that explains a wider array of results than previous models and gives further insight into the principle of derived attention. Our approach combines Bayesian linear regression with the assumption that the associations of any cue with various outcomes share the same prior variance, which can be thought of as the inherent importance of that cue. The new model simultaneously estimates cue-outcome associations and prior variance through approximate Bayesian learning. A significant cue will develop large associations, leading the model to estimate a high prior variance and hence to develop larger associations from that cue to novel outcomes. This provides a normative, statistical explanation for derived attention. Through simulation, we show that the Bayesian derived attention model explains not only the same phenomena as previous versions but also retrospective revaluation. It also makes a novel prediction: inattention after backward blocking. We hope that further development of the Bayesian derived attention model will shed light on the complex relationship between the effects of uncertainty and predictiveness on attention.
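To make the core idea concrete, the following is a minimal sketch, not the authors' model: it assumes a diagonal Gaussian posterior over cue-outcome weights, Kalman-style trial-by-trial updating, and an empirical-Bayes re-estimate of each cue's prior variance from the posterior second moments of its associations across outcomes. The variable names (e.g., prior_var) and the simple learned-predictiveness design are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the derived-attention idea: cues with strong associations end up with a
# larger estimated prior variance ("importance"), so new associations from those cues
# would start from a broader prior. Assumptions are noted in the lead-in paragraph.
import numpy as np

n_cues, n_outcomes = 4, 2                 # cues: A, B (predictive), X, Y (nonpredictive)
noise_var = 0.5                           # assumed outcome noise variance
mean = np.zeros((n_cues, n_outcomes))     # posterior means of cue-outcome associations
var = np.ones((n_cues, n_outcomes))       # posterior variances (diagonal approximation)
prior_var = np.ones(n_cues)               # per-cue prior variance, i.e. cue importance

def trial(x, y):
    """One trial: cue vector x (0/1) is paired with outcome vector y."""
    global prior_var
    for k in range(n_outcomes):
        pred = x @ mean[:, k]                        # predicted outcome k
        pred_var = noise_var + (x ** 2) @ var[:, k]  # predictive variance
        gain = (x * var[:, k]) / pred_var            # Kalman-style gain
        mean[:, k] += gain * (y[k] - pred)           # update association means
        var[:, k] *= 1.0 - gain * x                  # shrink posterior variances
    # Empirical-Bayes step: a cue's prior variance tracks the average posterior
    # second moment of its associations across all outcomes.
    prior_var = (mean ** 2 + var).mean(axis=1)

rng = np.random.default_rng(0)
for _ in range(200):                      # stage 1: A predicts outcome 1, B predicts
    c = rng.integers(2)                   # outcome 2; X and Y appear but are uninformative
    x = np.zeros(n_cues)
    x[c] = 1.0
    x[2 + rng.integers(2)] = 1.0
    y = np.zeros(n_outcomes)
    y[c] = 1.0
    trial(x, y)

print("estimated prior variances:", np.round(prior_var, 3))
# Predictive cues (A, B) end up with larger estimated prior variances than the
# nonpredictive cues (X, Y), so their associations with a novel outcome would begin
# from a broader prior and grow faster -- the derived-attention effect the abstract
# describes. A full model would also feed prior_var back into the prior for new weights.
```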