van Beers R J, Sittig A C, Denier van der Gon J J
Department of Industrial Design Engineering, Delft University of Technology, The Netherlands.
Exp Brain Res. 1996 Sep;111(2):253-61. doi: 10.1007/BF00227302.
To study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table, concealing it from view. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual, or both proprioceptive and visual information. We analysed the variance of the indicated positions in each condition and compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results disagree with the model: the variance in the condition with both visual and proprioceptive information is smaller than expected from the variances in the other conditions. This means that the available information was integrated in a highly effective way, and it suggests that additional information was used. This information might have been visual information about body parts other than the fingertip, or visual information about the environment.
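The "most effective way" benchmark referred to above is, under the usual assumptions of independent, unbiased cues, the minimum-variance (maximum-likelihood) combination: each cue is weighted inversely to its variance, so the predicted combined variance is below either single-cue variance. A minimal sketch of that prediction, using hypothetical variance values for illustration:

```python
def predicted_combined_variance(var_visual: float, var_proprio: float) -> float:
    """Minimum-variance prediction for two independent, unbiased cues:
    sigma_vp^2 = (sigma_v^2 * sigma_p^2) / (sigma_v^2 + sigma_p^2),
    which is always smaller than either single-cue variance."""
    return (var_visual * var_proprio) / (var_visual + var_proprio)

# Hypothetical single-cue variances (cm^2), chosen only for illustration.
var_v, var_p = 0.4, 0.9
var_vp = predicted_combined_variance(var_v, var_p)
print(var_vp)  # approx. 0.277, below both single-cue variances
```

A combined-condition variance even lower than this prediction, as reported here, is what motivates the conclusion that subjects drew on information beyond the two cues assumed by the model.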