
Embodied Cross-Modal Interactions Based on an Altercentric Reference Frame

Authors

Guo Guanchen, Wang Nanbo, Sun Chu, Geng Haiyan

Affiliations

School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China.

Department of Psychology, School of Health, Fujian Medical University, Fuzhou 350122, China.

Publication

Brain Sci. 2024 Mar 27;14(4):314. doi: 10.3390/brainsci14040314.

Abstract

Accurate comprehension of others' thoughts and intentions is crucial for smooth social interaction, and understanding their perceptual experiences serves as a fundamental basis for this high-level social cognition. However, previous research on perceptual processing from others' perspectives has focused predominantly on the visual modality, leaving multisensory input during this process largely unexplored. By incorporating auditory stimuli into visual perspective-taking (VPT) tasks, we designed a novel experimental paradigm in which the spatial correspondence between visual and auditory stimuli held only in the altercentric, not the egocentric, reference frame. Overall, we found that when individuals engaged in explicit or implicit VPT to process visual stimuli from an avatar's viewpoint, the concomitantly presented auditory stimuli were also processed within this avatar-centered reference frame, revealing altercentric cross-modal interactions.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0791/11048532/ba35e2017b01/brainsci-14-00314-g001.jpg
