
Similar Articles

1. Look at what I can do: Object affordances guide visual attention while speakers describe potential actions. Atten Percept Psychophys. 2022 Jul;84(5):1583-1610. doi: 10.3758/s13414-022-02467-6. Epub 2022 Apr 28.
2. Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions. J Exp Psychol Learn Mem Cogn. 2020 Sep;46(9):1659-1681. doi: 10.1037/xlm0000837. Epub 2020 Apr 9.
4. Semantic guidance of eye movements in real-world scenes. Vision Res. 2011 May 25;51(10):1192-205. doi: 10.1016/j.visres.2011.03.010. Epub 2011 Mar 21.
6. Visual attention during seeing for speaking in healthy aging. Psychol Aging. 2023 Feb;38(1):49-66. doi: 10.1037/pag0000718. Epub 2022 Nov 17.
7. Rapid Extraction of the Spatial Distribution of Physical Saliency and Semantic Informativeness from Natural Scenes in the Human Brain. J Neurosci. 2022 Jan 5;42(1):97-108. doi: 10.1523/JNEUROSCI.0602-21.2021. Epub 2021 Nov 8.
8. Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes. Psychol Sci. 2021 Aug;32(8):1262-1270. doi: 10.1177/0956797621994768. Epub 2021 Jul 12.
9. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove). 2023 Mar;76(3):632-648. doi: 10.1177/17470218221101334. Epub 2022 Jun 8.
10. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes. Vision Res. 2014 Dec;105:10-20. doi: 10.1016/j.visres.2014.08.019. Epub 2014 Sep 6.

Cited By

1. CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision. Proc ACM Symp User Interface Softw Tech. 2024 Oct;2024. doi: 10.1145/3654777.3676449. Epub 2024 Oct 11.
2. The label-feedback effect is influenced by target category in visual search. PLoS One. 2024 Aug 1;19(8):e0306736. doi: 10.1371/journal.pone.0306736. eCollection 2024.
3. Speakers prioritise affordance-based object semantics in scene descriptions. Lang Cogn Neurosci. 2023;38(8):1045-1067. doi: 10.1080/23273798.2023.2190136. Epub 2023 Mar 30.

References Cited in This Article

1. Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes. Psychol Sci. 2021 Aug;32(8):1262-1270. doi: 10.1177/0956797621994768. Epub 2021 Jul 12.
4. When more is more: redundant modifiers can facilitate visual search. Cogn Res Princ Implic. 2021 Feb 17;6(1):10. doi: 10.1186/s41235-021-00275-4.
5. Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex. Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29354-29362. doi: 10.1073/pnas.1912333117.
6. Why do we retrace our visual steps? Semantic and episodic memory in gaze reinstatement. Learn Mem. 2020 Jun 15;27(7):275-283. doi: 10.1101/lm.051227.119. Print 2020 Jul.
8. Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions. J Exp Psychol Learn Mem Cogn. 2020 Sep;46(9):1659-1681. doi: 10.1037/xlm0000837. Epub 2020 Apr 9.
9. The grounding of abstract concepts in the motor and visual system: An fMRI study. Cortex. 2020 Mar;124:1-22. doi: 10.1016/j.cortex.2019.10.014. Epub 2019 Nov 13.
10. Center bias outperforms image salience but not semantics in accounting for attention during scene viewing. Atten Percept Psychophys. 2020 Jun;82(3):985-994. doi: 10.3758/s13414-019-01849-7.
