Suppr 超能文献


Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments.

Affiliations

Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany.

Department of Psychology, Goethe University Frankfurt, 60323, Frankfurt am Main, Hesse, Germany.

Publication

Sci Rep. 2024 Jul 5;14(1):15549. doi: 10.1038/s41598-024-66428-9.

DOI: 10.1038/s41598-024-66428-9
PMID: 38969745
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11226608/
Abstract

Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), with two anchors connected by a shelf, onto which were presented three local objects (congruent with one anchor) (Encoding). The scene was re-presented (Test) with 1) local objects missing and 2) one of the anchors shifted (Shift) or not (No shift). Participants, then, saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
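The Shift manipulation described above is typically quantified by asking how much of the anchor's displacement transfers to the remembered object placement. The following toy sketch is not from the paper; it merely illustrates the logic under the assumption that a purely allocentric (anchor-relative) memory would move with the anchor (transfer ≈ 1), while a purely egocentric memory would not (transfer ≈ 0). The function name and units are hypothetical.

```python
def shift_transfer_index(placement_shift_cm: float, anchor_shift_cm: float) -> float:
    """Fraction of the anchor's displacement reflected in the object placement.

    A value near 1 suggests anchor-relative (allocentric) coding;
    a value near 0 suggests the placement ignored the anchor shift.
    """
    if anchor_shift_cm == 0:
        raise ValueError("anchor did not shift; transfer index is undefined")
    return placement_shift_cm / anchor_shift_cm

# Hypothetical numbers: the anchor moved 20 cm and the remembered
# placement moved 8 cm in the same direction -> 40% transfer.
print(shift_transfer_index(8.0, 20.0))  # 0.4
```

In a No-shift trial the denominator is zero, so such trials serve as a baseline for placement variability rather than entering this index.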


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/03446b555f4c/41598_2024_66428_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/0a21b35f9008/41598_2024_66428_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/7c22cc76d791/41598_2024_66428_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/a40032a09d0e/41598_2024_66428_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/5ce875f703b8/41598_2024_66428_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/476f152fa33d/41598_2024_66428_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/2f7cd84b814b/41598_2024_66428_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/ea9bce03a602/41598_2024_66428_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff13/11226608/429ce517c54e/41598_2024_66428_Fig8_HTML.jpg

Similar Articles

1. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments.
   Sci Rep. 2024 Jul 5;14(1):15549. doi: 10.1038/s41598-024-66428-9.
2. Facilitation of allocentric coding by virtue of object-semantics.
   Sci Rep. 2019 Apr 18;9(1):6263. doi: 10.1038/s41598-019-42735-4.
3. Allocentric information is used for memory-guided reaching in depth: A virtual reality study.
   Vision Res. 2016 Dec;129:13-24. doi: 10.1016/j.visres.2016.10.004. Epub 2016 Nov 1.
4. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching.
   Front Neurosci. 2017 Apr 13;11:204. doi: 10.3389/fnins.2017.00204. eCollection 2017.
5. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment.
   Front Hum Neurosci. 2014 Aug 25;8:636. doi: 10.3389/fnhum.2014.00636. eCollection 2014.
6. The role of perception and action on the use of allocentric information in a large-scale virtual environment.
   Exp Brain Res. 2020 Sep;238(9):1813-1826. doi: 10.1007/s00221-020-05839-2. Epub 2020 Jun 4.
7. Stuck on semantics: Processing of irrelevant object-scene inconsistencies modulates ongoing gaze behavior.
   Atten Percept Psychophys. 2017 Jan;79(1):154-168. doi: 10.3758/s13414-016-1203-7.
8. Contextual factors determine the use of allocentric information for reaching in a naturalistic scene.
   J Vis. 2015;15(13):24. doi: 10.1167/15.13.24.
9. Egocentric cues influence the allocentric spatial memory of object configurations for memory-guided actions.
   J Neurophysiol. 2023 Nov 1;130(5):1142-1149. doi: 10.1152/jn.00149.2023. Epub 2023 Oct 4.
10. Spatial updating of allocentric landmark information in real-time and memory-guided reaching.
    Cortex. 2020 Apr;125:203-214. doi: 10.1016/j.cortex.2019.12.010. Epub 2020 Jan 7.

References Cited in This Article

1. Access to meaning from visual input: Object and word frequency effects in categorization behavior.
   J Exp Psychol Gen. 2023 Oct;152(10):2861-2881. doi: 10.1037/xge0001342. Epub 2023 May 8.
2. Eye Tracking in Virtual Reality: Vive Pro Eye Spatial Accuracy, Precision, and Calibration Reliability.
   J Eye Mov Res. 2022 Sep 7;15(3). doi: 10.16910/jemr.15.3.3. eCollection 2022.
3. Dimensions underlying human understanding of the reachable world.
   Cognition. 2023 May;234:105368. doi: 10.1016/j.cognition.2023.105368. Epub 2023 Jan 13.
4. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior.
   Psychol Sci. 2022 Sep;33(9):1463-1476. doi: 10.1177/09567976221091838. Epub 2022 Aug 9.
5. Eye-Tracking for Clinical Ophthalmology with Virtual Reality (VR): A Case Study of the HTC Vive Pro Eye's Usability.
   Healthcare (Basel). 2021 Feb 9;9(2):180. doi: 10.3390/healthcare9020180.
6. The meaning and structure of scenes.
   Vision Res. 2021 Apr;181:10-20. doi: 10.1016/j.visres.2020.11.003. Epub 2021 Jan 8.
7. Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality.
   Brain Sci. 2021 Jan 4;11(1):44. doi: 10.3390/brainsci11010044.
8. Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex.
   Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29354-29362. doi: 10.1073/pnas.1912333117.
9. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment.
   Brain Sci. 2020 Nov 12;10(11):841. doi: 10.3390/brainsci10110841.
10. Development and Calibration of an Eye-Tracking Fixation Identification Algorithm for Immersive Virtual Reality.
    Sensors (Basel). 2020 Sep 1;20(17):4956. doi: 10.3390/s20174956.