

Testing the limits of optimal integration of visual and proprioceptive information of path trajectory.

Affiliations

Department of Experimental and Biological Psychology, Philipps-University Marburg, Marburg, Germany.

Publication information

Exp Brain Res. 2011 Apr;209(4):619-30. doi: 10.1007/s00221-011-2596-0. Epub 2011 Feb 24.

Abstract

Many studies provide evidence that information from different modalities is integrated following the maximum likelihood estimation model (MLE). For instance, we recently found that visual and proprioceptive path trajectories are optimally combined (Reuschel et al. in Exp Brain Res 201:853-862, 2010). However, other studies have failed to reveal optimal integration of such dynamic information. In the present study, we aim to generalize our previous findings to different parts of the workspace (central, ipsilateral, or contralateral) and to different types of judgments (relative vs. absolute). Participants made relative judgments by judging whether an angular path was acute or obtuse, or they made absolute judgments by judging whether a one-segmented straight path was directed to the left or to the right. Trajectories were presented in the visual, proprioceptive, or combined visual-proprioceptive modality. We measured the bias and the variance of these estimates and predicted both parameters using the MLE. In accordance with the MLE model, participants linearly combined and weighted the unimodal angular path information by their reliabilities, irrespective of the side of workspace. However, the precision of bimodal estimates was not greater than that for unimodal estimates, which is inconsistent with the MLE. For the absolute judgment task, participants' estimates were highly accurate and did not differ across modalities. Thus, we were unable to test whether the bimodal percept resulted as a weighted average of the visual and proprioceptive input. Additionally, participants were not more precise in the bimodal compared with the unimodal conditions, which is inconsistent with the MLE. Current findings suggest that optimal integration of visual and proprioceptive information of path trajectory only applies in some conditions.
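For reference, the two MLE predictions the abstract tests can be written in the standard cue-combination form; this is a minimal sketch of the generic model, and the symbols (visual and proprioceptive estimates S_V, S_P with variances sigma_V^2, sigma_P^2) are conventional notation rather than values taken from the paper:

\hat{S}_{VP} = w_V \hat{S}_V + w_P \hat{S}_P, \qquad
w_V = \frac{\sigma_P^2}{\sigma_V^2 + \sigma_P^2}, \quad
w_P = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_P^2},

\sigma_{VP}^2 = \frac{\sigma_V^2 \, \sigma_P^2}{\sigma_V^2 + \sigma_P^2} \;\le\; \min\!\left(\sigma_V^2, \sigma_P^2\right).

In these terms, the abstract reports that the first prediction (weighting of the unimodal estimates by their reliabilities) held for the angular-path judgments, whereas the second prediction (bimodal variance below the better unimodal variance) was not confirmed in either task.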
