Glennerster Andrew, Tcheang Lili, Gilson Stuart J, Fitzgibbon Andrew W, Parker Andrew J
Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, United Kingdom.
Curr Biol. 2006 Feb 21;16(4):428-32. doi: 10.1016/j.cub.2006.01.019.
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved. Previous evidence shows that the human visual system accounts for the distance the observer has walked and the separation of the eyes when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
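The Bayesian cue-integration account mentioned above can be illustrated with a standard precision-weighted (inverse-variance) fusion rule, in which each cue's weight falls as its noise grows. This is a minimal sketch of that general technique, not the authors' actual model; the function name, the specific noise values, and the assumption that cue noise grows with viewing distance are all illustrative.

```python
import math

def fuse_cues(estimates, sigmas):
    """Combine independent cue estimates by inverse-variance weighting.

    estimates: per-cue estimates of a scene property (e.g. object size)
    sigmas:    per-cue noise standard deviations; under the hedged
               assumption used here, motion and disparity sigmas grow
               with viewing distance, so those cues dominate only
               at near distances.
    """
    weights = [1.0 / s**2 for s in sigmas]          # precision of each cue
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_sigma = math.sqrt(1.0 / total)            # fused estimate is more precise
    return fused, fused_sigma
```

Under this rule, a reliable cue (small sigma) pulls the fused estimate toward itself, so if motion and disparity become noisy at far viewing distances, a conflicting prior (e.g. "the scene is stable in size") can dominate, consistent with the failures to notice scene expansion reported in the abstract.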