Self-motion leads to mandatory cue fusion across sensory modalities.

Affiliations

Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.

Publication Information

J Neurophysiol. 2012 Oct;108(8):2282-91. doi: 10.1152/jn.00439.2012. Epub 2012 Jul 25.

Abstract

When perceiving properties of the world, we effortlessly combine multiple sensory cues into optimal estimates. Estimates derived from the individual cues are generally retained once the multisensory estimate is produced and discarded only if the cues stem from the same sensory modality (i.e., mandatory fusion). Does multisensory integration differ in that respect when the object of perception is one's own body, rather than an external variable? We quantified how humans combine visual and vestibular information for perceiving own-body rotations and specifically tested whether such idiothetic cues are subjected to mandatory fusion. Participants made extensive size comparisons between successive whole body rotations using only visual, only vestibular, and both senses together. Probabilistic descriptions of the subjects' perceptual estimates were compared with a Bayes-optimal integration model. Similarity between model predictions and experimental data echoed a statistically optimal mechanism of multisensory integration. Most importantly, size discrimination data for rotations composed of both stimuli was best accounted for by a model in which only the bimodal estimator is accessible for perceptual judgments as opposed to an independent or additive use of all three estimators (visual, vestibular, and bimodal). Indeed, subjects' thresholds for detecting two multisensory rotations as different from one another were, in pertinent cases, larger than those measured using either single-cue estimate alone. Rotations different in terms of the individual visual and vestibular inputs but quasi-identical in terms of the integrated bimodal estimate became perceptual metamers. This reveals an exceptional case of mandatory fusion of cues stemming from two different sensory modalities.
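For context, the sketch below illustrates the standard maximum-likelihood (reliability-weighted) scheme on which Bayes-optimal cue-integration models of this kind are based, and how two rotations that differ in their visual and vestibular components can share the same fused estimate, i.e. become perceptual metamers when only the bimodal estimate is accessible. This is a minimal illustration, not the paper's implementation; the noise levels (sigma_vis, sigma_vest) and rotation sizes are hypothetical.

```python
import math

# Minimal sketch (not from the paper) of the standard maximum-likelihood,
# i.e. Bayes-optimal, cue-integration model referred to in the abstract.
# All numerical values are hypothetical.

sigma_vis = 4.0    # SD of the visual rotation estimate (deg), assumed
sigma_vest = 6.0   # SD of the vestibular rotation estimate (deg), assumed

# Reliability (inverse-variance) weights
w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
w_vest = 1.0 - w_vis

def fused_estimate(theta_vis, theta_vest):
    """Reliability-weighted combination of the two single-cue estimates."""
    return w_vis * theta_vis + w_vest * theta_vest

# Predicted SD of the fused estimate: never larger than either single-cue SD,
# which is the signature of statistically optimal integration.
sigma_bi = math.sqrt((sigma_vis**2 * sigma_vest**2) /
                     (sigma_vis**2 + sigma_vest**2))

# A "perceptual metamer": shift the vestibular component by d and the visual
# component by -d * w_vest / w_vis, so the fused estimate is unchanged. Under
# mandatory fusion only the fused value is accessible, so the two rotations
# become indistinguishable even though each single cue on its own differs.
d = 10.0
theta_a = fused_estimate(90.0, 90.0)
theta_b = fused_estimate(90.0 + d * w_vest / w_vis, 90.0 - d)

print(f"w_vis = {w_vis:.2f}, predicted bimodal SD = {sigma_bi:.2f} deg")
print(f"fused estimate A = {theta_a:.1f} deg, B = {theta_b:.1f} deg")
```

In this framing, the prediction tested in the paper is that bimodal discrimination follows the fused estimate alone: rotation pairs constructed like A and B above should be hard to tell apart, even though either single cue by itself would distinguish them.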
