SUN: Top-down saliency using natural statistics.

Author Information

Kanan Christopher, Tong Mathew H, Zhang Lingyun, Cottrell Garrison W

Affiliations

Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA, USA.

Publication Information

Vis cogn. 2009 Aug 1;17(6-7):979-1003. doi: 10.1080/13506280902771138.

Abstract

When people try to find particular objects in natural scenes, they make extensive use of knowledge about how and where objects tend to appear in a scene. Although many forms of such "top-down" knowledge have been incorporated into saliency map models of visual search, surprisingly, the role of object appearance has rarely been investigated. Here we present an appearance-based saliency model derived in a Bayesian framework. We compare our approach with both bottom-up saliency algorithms and the state-of-the-art Contextual Guidance model of Torralba et al. (2006) at predicting human fixations. Although the two top-down approaches use very different types of information, they achieve similar performance, each substantially better than the purely bottom-up models. Our experiments reveal that a simple model of object appearance can predict human fixations quite well, even making the same mistakes as people.
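
The Bayesian decomposition underlying SUN is developed in the companion paper, "SUN: A Bayesian framework for saliency using natural statistics" (Zhang et al., 2008, J Vis; listed under References below). As a sketch in that paper's notation, where F denotes local visual features, L location, and C = 1 "target present" at a point z:

```latex
% Pointwise saliency s_z = p(C = 1 | F = f_z, L = l_z), in log form
% (decomposition from the companion SUN paper, Zhang et al., 2008):
\log s_z =
    \underbrace{-\log p(F = f_z)}_{\text{bottom-up: self-information}}
  + \underbrace{\log p(F = f_z \mid C = 1)}_{\text{top-down: target appearance}}
  + \underbrace{\log p(C = 1 \mid L = l_z)}_{\text{location prior}}
```

The first term alone yields bottom-up saliency (rare features attract attention); the appearance likelihood p(F = f_z | C = 1) is the top-down term learned from natural statistics, in contrast to the global scene-context cue used by the Contextual Guidance model.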

Similar Articles

1. SUN: Top-down saliency using natural statistics. Vis cogn. 2009 Aug 1;17(6-7):979-1003. doi: 10.1080/13506280902771138.
2. What stands out in a scene? A study of human explicit saliency judgment. Vision Res. 2013 Oct 18;91:62-77. doi: 10.1016/j.visres.2013.07.016. Epub 2013 Aug 15.
3. Modeling Human Visual Search in Natural Scenes: A Combined Bayesian Searcher and Saliency Map Approach. Front Syst Neurosci. 2022 May 27;16:882315. doi: 10.3389/fnsys.2022.882315. eCollection 2022.
4. SUN: A Bayesian framework for saliency using natural statistics. J Vis. 2008 Dec 16;8(7):32.1-20. doi: 10.1167/8.7.32.
6. Reconciling Saliency and Object Center-Bias Hypotheses in Explaining Free-Viewing Fixations. IEEE Trans Neural Netw Learn Syst. 2016 Jun;27(6):1214-26. doi: 10.1109/TNNLS.2015.2480683. Epub 2015 Oct 7.
7. Objects guide human gaze behavior in dynamic real-world scenes. PLoS Comput Biol. 2023 Oct 26;19(10):e1011512. doi: 10.1371/journal.pcbi.1011512. eCollection 2023 Oct.
8. A proto-object-based computational model for visual saliency. J Vis. 2013 Nov 26;13(13):27. doi: 10.1167/13.13.27.
9. Coding of saliency by ensemble bursting in the amygdala of primates. Front Behav Neurosci. 2012 Jul 25;6:38. doi: 10.3389/fnbeh.2012.00038. eCollection 2012.
10. Fixation and saliency during search of natural scenes: the case of visual agnosia. Neuropsychologia. 2009 Jul;47(8-9):1994-2003. doi: 10.1016/j.neuropsychologia.2009.03.013. Epub 2009 Mar 18.

Cited By

1. BIAS-3D: Brain inspired attentional search model fashioned after what and where/how pathways for target search in 3D environment. Front Comput Neurosci. 2022 Nov 18;16:1012559. doi: 10.3389/fncom.2022.1012559. eCollection 2022.
2. Where to look at the movies: Analyzing visual attention to understand movie editing. Behav Res Methods. 2023 Sep;55(6):2940-2959. doi: 10.3758/s13428-022-01949-7. Epub 2022 Aug 24.
3. A novel fully convolutional network for visual saliency prediction. PeerJ Comput Sci. 2020 Jul 13;6:e280. doi: 10.7717/peerj-cs.280. eCollection 2020.
4. Searching in CCTV: effects of organisation in the multiplex. Cogn Res Princ Implic. 2021 Feb 18;6(1):11. doi: 10.1186/s41235-021-00277-2.
5. Salience Models: A Computational Cognitive Neuroscience Review. Vision (Basel). 2019 Oct 25;3(4):56. doi: 10.3390/vision3040056.
6. Image Content Enhancement Through Salient Regions Segmentation for People With Color Vision Deficiencies. Iperception. 2019 May 10;10(3):2041669519841073. doi: 10.1177/2041669519841073. eCollection 2019 May-Jun.
7. Joint Learning of Binocularly Driven Saccades and Vergence by Active Efficient Coding. Front Neurorobot. 2017 Nov 3;11:58. doi: 10.3389/fnbot.2017.00058. eCollection 2017.
9. What has been missed for predicting human attention in viewing driving clips? PeerJ. 2017 Feb 1;5:e2946. doi: 10.7717/peerj.2946. eCollection 2017.
10. Control of gaze while walking: Task structure, reward, and uncertainty. J Vis. 2017 Jan 1;17(1):28. doi: 10.1167/17.1.28.

References

1. Guided Search 2.0: A revised model of visual search. Psychon Bull Rev. 1994 Jun;1(2):202-38. doi: 10.3758/BF03200774.
2. Modeling Search for People in 900 Scenes: A combined source model of eye guidance. Vis cogn. 2009 Aug 1;17(6-7):945-978. doi: 10.1080/13506280902834720.
3. SUN: A Bayesian framework for saliency using natural statistics. J Vis. 2008 Dec 16;8(7):32.1-20. doi: 10.1167/8.7.32.
6. Human eye-head co-ordination in natural exploration. Network. 2007 Sep;18(3):267-97. doi: 10.1080/09548980701671094.
7. Predicting visual fixations on video based on low-level visual features. Vision Res. 2007 Sep;47(19):2483-98. doi: 10.1016/j.visres.2007.06.015. Epub 2007 Aug 3.
9. Where to look next? Eye movements reduce local uncertainty. J Vis. 2007 Feb 27;7(3):6. doi: 10.1167/7.3.6.
