Ostendorf Florian, Dolan Raymond J
Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Dept. of Neurology, Charité-Universitätsmedizin Berlin, Berlin, Germany.
Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Russell Square House, London, United Kingdom.
PLoS One. 2015 Jan 20;10(1):e0116810. doi: 10.1371/journal.pone.0116810. eCollection 2015.
Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system must deal with these self-induced perceptual changes and distinguish them from external motion. Forward models offer one solution to this problem: the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of the corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should yield better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target that jumped unpredictably in synchrony with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, so that perceptual decisions could take into account the spatial distance between target and flanker. In this condition, precision was better than in control conditions in which target displacements could be estimated only from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, the integration of visual space across eye movements rests on close to optimal integration of both retinal and extraretinal information.
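The "close to optimal integration" the abstract refers to is commonly modeled as inverse-variance (maximum-likelihood) cue combination: each cue is weighted by its reliability, and the combined estimate is never less precise than the better single cue. A minimal sketch of this standard model (the cue values and variances below are illustrative assumptions, not data from the study):

```python
def combine_cues(x1, var1, x2, var2):
    """Inverse-variance (maximum-likelihood) combination of two noisy
    estimates of the same quantity, e.g. a target displacement signaled
    by an extraretinal cue (x1) and a visual relational cue (x2).

    Each cue is weighted by its reliability (1 / variance); the variance
    of the combined estimate is smaller than either input variance.
    """
    r1 = 1.0 / var1          # reliability of cue 1
    r2 = 1.0 / var2          # reliability of cue 2
    w1 = r1 / (r1 + r2)      # weight on cue 1
    estimate = w1 * x1 + (1.0 - w1) * x2
    combined_var = 1.0 / (r1 + r2)
    return estimate, combined_var

# Illustrative example: a noisy extraretinal estimate (variance 4.0)
# combined with a more reliable relational estimate (variance 1.0).
est, var = combine_cues(x1=1.0, var1=4.0, x2=0.0, var2=1.0)
# The combined variance (0.8) is below both single-cue variances,
# mirroring the precision benefit observed in the combined condition.
```

The weighting is asymmetric by design: the noisier cue contributes less, so precision predictions from this model can be tested against the single-cue control conditions described above.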