Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning.

Authors

Zelinsky Gregory J, Chen Yupei, Ahn Seoyoung, Adeli Hossein, Yang Zhibo, Huang Lihan, Samaras Dimitrios, Hoai Minh

Affiliations

Department of Psychology, Stony Brook University, Stony Brook, NY, 11794, USA.

Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA.

Publication

Neuron Behav Data Anal Theory. 2021;2021. doi: 10.51628/001c.22322. Epub 2021 Apr 20.

Abstract

Understanding how goals control behavior is a question ripe for interrogation by new methods from machine learning. These methods require large and labeled datasets to train models. To annotate a large-scale image dataset with observed search fixations, we collected 16,184 fixations from people searching for either microwaves or clocks in a dataset of 4,366 images (MS-COCO). We then used this behaviorally-annotated dataset and the machine learning method of inverse-reinforcement learning (IRL) to learn target-specific reward functions and policies for these two target goals. Finally, we used these learned policies to predict the fixations of 60 new behavioral searchers (clock = 30, microwave = 30) in a disjoint test dataset of kitchen scenes depicting both a microwave and a clock (thus controlling for differences in low-level image contrast). We found that the IRL model predicted behavioral search efficiency and fixation-density maps using multiple metrics. Moreover, reward maps from the IRL model revealed target-specific patterns that suggest, not just attention guidance by target features, but also guidance by scene context (e.g., fixations along walls in the search of clocks). Using machine learning and the psychologically meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.

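To make the approach concrete, below is a minimal sketch of tabular maximum-entropy IRL applied to fixation scanpaths over a coarse image grid: a reward function is learned so that a soft-optimal policy reproduces the demonstrated fixation statistics. This is an illustration only, not the authors' implementation (the paper uses a deep, adversarial variant of IRL trained on behaviorally annotated COCO images); N_CELLS, demo_paths, and the random features are hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 16    # e.g., a 4x4 grid of candidate fixation locations
N_FEATS = 8     # per-cell visual features (random stand-ins here)
HORIZON = 6     # fixations per scanpath
GAMMA = 0.9     # discount applied to later fixations

features = rng.normal(size=(N_CELLS, N_FEATS))  # phi(s), one row per cell

# Stand-in "behavioral" scanpaths: sequences of grid-cell indices.
demo_paths = [rng.choice(N_CELLS, size=HORIZON).tolist() for _ in range(20)]

def expert_feature_expectations(paths):
    # Average discounted feature counts along the demonstrated scanpaths.
    mu = np.zeros(N_FEATS)
    for path in paths:
        for t, s in enumerate(path):
            mu += (GAMMA ** t) * features[s]
    return mu / len(paths)

def soft_policy(reward):
    # Soft value iteration; a saccade may move the eye to any cell,
    # so the Q-value depends only on the cell fixated next.
    V = np.zeros(N_CELLS)
    for _ in range(100):
        q = reward + GAMMA * V
        V = np.full(N_CELLS, np.log(np.exp(q - q.max()).sum()) + q.max())
    q = reward + GAMMA * V
    p = np.exp(q - q.max())
    return p / p.sum()          # pi(next fixation = s')

def model_feature_expectations(pi):
    # Discounted feature expectations of scanpaths sampled from pi.
    mu = np.zeros(N_FEATS)
    d = np.full(N_CELLS, 1.0 / N_CELLS)   # first fixation: anywhere
    for t in range(HORIZON):
        mu += (GAMMA ** t) * (d @ features)
        d = pi                             # later fixations follow pi
    return mu

# Gradient ascent on the max-entropy objective: match the searchers'
# feature expectations with those of the soft-optimal policy.
theta = np.zeros(N_FEATS)
mu_expert = expert_feature_expectations(demo_paths)
for _ in range(200):
    reward = features @ theta              # linear reward r(s) = theta . phi(s)
    pi = soft_policy(reward)
    theta += 0.05 * (mu_expert - model_feature_expectations(pi))

# The learned per-cell rewards play the role of the paper's reward maps.
print((features @ theta).reshape(4, 4).round(2))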

Similar Articles

1
Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning.
Neuron Behav Data Anal Theory. 2021;2021. doi: 10.51628/001c.22322. Epub 2021 Apr 20.
2
Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2020 Jun;2020:190-199. doi: 10.1109/cvpr42600.2020.00027. Epub 2020 Aug 5.
3
COCO-Search18 fixation dataset for predicting goal-directed attention control.
Sci Rep. 2021 Apr 22;11(1):8776. doi: 10.1038/s41598-021-87715-9.
4
Target-absent Human Attention.
Comput Vis ECCV. 2022 Oct;13664:52-68. doi: 10.1007/978-3-031-19772-7_4. Epub 2022 Oct 23.
5
Saliency Prediction on Omnidirectional Image With Generative Adversarial Imitation Learning.
IEEE Trans Image Process. 2021;30:2087-2102. doi: 10.1109/TIP.2021.3050861. Epub 2021 Jan 21.
6
A Model of the Superior Colliculus Predicts Fixation Locations during Scene Viewing and Visual Search.
J Neurosci. 2017 Feb 8;37(6):1453-1467. doi: 10.1523/JNEUROSCI.0825-16.2016. Epub 2016 Dec 30.
7
What stands out in a scene? A study of human explicit saliency judgment.
Vision Res. 2013 Oct 18;91:62-77. doi: 10.1016/j.visres.2013.07.016. Epub 2013 Aug 15.
8
Goal-Directed and Habit-Like Modulations of Stimulus Processing during Reinforcement Learning.
J Neurosci. 2017 Mar 15;37(11):3009-3017. doi: 10.1523/JNEUROSCI.3205-16.2017. Epub 2017 Feb 13.
9
Predicting the eye fixation locations in the gray scale images in the visual scenes with different semantic contents.
Cogn Neurodyn. 2016 Feb;10(1):31-47. doi: 10.1007/s11571-015-9357-x. Epub 2015 Oct 7.

Cited By

1
Searching for meaning: Local scene semantics guide attention during natural visual search in scenes.
Q J Exp Psychol (Hove). 2023 Mar;76(3):632-648. doi: 10.1177/17470218221101334. Epub 2022 Jun 8.
2
Domain Adaptation for Imitation Learning Using Generative Adversarial Network.
Sensors (Basel). 2021 Jul 9;21(14):4718. doi: 10.3390/s21144718.
3
Attention in Psychology, Neuroscience, and Machine Learning.
Front Comput Neurosci. 2020 Apr 16;14:29. doi: 10.3389/fncom.2020.00029. eCollection 2020.

References

1
Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2020 Jun;2020:190-199. doi: 10.1109/cvpr42600.2020.00027. Epub 2020 Aug 5.
2
COCO-Search18 fixation dataset for predicting goal-directed attention control.
Sci Rep. 2021 Apr 22;11(1):8776. doi: 10.1038/s41598-021-87715-9.
3
Finding any Waldo with zero-shot invariant and efficient visual search.
Nat Commun. 2018 Sep 13;9(1):3730. doi: 10.1038/s41467-018-06217-x.
4
What Do Different Evaluation Metrics Tell Us About Saliency Models?
IEEE Trans Pattern Anal Mach Intell. 2019 Mar;41(3):740-757. doi: 10.1109/TPAMI.2018.2815601. Epub 2018 Mar 13.
5
Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2015 Jul;37(7):1408-24. doi: 10.1109/TPAMI.2014.2366154.
6
The neural basis of attentional control in visual search.
Trends Cogn Sci. 2014 Oct;18(10):526-35. doi: 10.1016/j.tics.2014.05.005. Epub 2014 Jun 11.
7
Guided Search 2.0: A revised model of visual search.
Psychon Bull Rev. 1994 Jun;1(2):202-38. doi: 10.3758/BF03200774.
8
A value-driven mechanism of attentional selection.
J Vis. 2013 Apr 15;13(3):7. doi: 10.1167/13.3.7.
