

Storing upright turns: how visual and vestibular cues interact during the encoding and recalling process.

Affiliation

Max Planck Institute for Biological Cybernetics, Tübingen, Germany.

Publication

Exp Brain Res. 2010 Jan;200(1):37-49. doi: 10.1007/s00221-009-1980-5. Epub 2009 Aug 25.

Abstract

Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet clear how different senses providing information about our own movements combine in order to produce a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited-lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (an unnoticed conflict). They were then asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow those of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that when both visual and vestibular cues are available, they combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.
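The third finding, reduced variance when both cues are available, is consistent with the standard maximum-likelihood cue-combination model, in which each unimodal estimate is treated as Gaussian and the fused estimate is a precision-weighted average. A minimal sketch of that model (illustrative only; the specific turn angles and variances below are hypothetical, not values from the paper):

```python
def combine_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Precision-weighted fusion of a visual and a vestibular estimate.

    Each cue is modeled as an independent Gaussian estimate of the turn
    angle. The optimal (maximum-likelihood) combined estimate weights
    each cue by its reliability (inverse variance); the combined variance
    is always smaller than either unimodal variance.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = (var_vis * var_vest) / (var_vis + var_vest)
    return mu, var

# Hypothetical numbers: a turn estimated at 85 deg visually (var 100)
# and 95 deg vestibularly (var 225).
mu, var = combine_cues(mu_vis=85.0, var_vis=100.0, mu_vest=95.0, var_vest=225.0)
# The fused estimate lies between the two unimodal estimates, and its
# variance is below both -- matching the pattern reported in the abstract.
print(mu, var)
```

Note that this model predicts the bimodal reproduction should sit between the unimodal ones with reduced variance, which is the qualitative pattern the authors report when the intersensory gain was preserved.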


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a1e7/2800859/4ebcf53d32ef/221_2009_1980_Fig1_HTML.jpg
