Atsma Jeroen, Maij Femke, Koppen Mathieu, Irwin David E, Medendorp W Pieter
Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands.
University of Illinois at Urbana-Champaign, Department of Psychology, Champaign, Illinois, United States of America.
PLoS Comput Biol. 2016 Mar 11;12(3):e1004766. doi: 10.1371/journal.pcbi.1004766. eCollection 2016 Mar.
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, a phenomenon called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we tested how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the direction of a horizontal saccade and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data with a Bayesian causal inference mechanism in which, at the trial level, an optimal mixture of two possible strategies, integration versus separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
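The causal-inference mixture described in the abstract can be illustrated with a minimal sketch. This is not the authors' fitted model; it is a generic Bayesian causal-inference scheme of the kind the abstract refers to, with hypothetical parameter values (the noise levels, the prior probability of a common cause, and the uniform range for a separate cause are all assumptions for illustration):

```python
import math

def causal_inference_estimate(x_pre, x_post, sigma_pre, sigma_post,
                              p_common=0.5, uniform_range=20.0):
    """Sketch of Bayesian causal inference over two structures.

    C=1 (stable world): the pre- and postsaccadic signals share one
        cause, so they are integrated by reliability weighting.
    C=2 (object displaced): the signals have separate causes, so the
        presaccadic memory is reported on its own.
    The final estimate is a posterior-weighted average of the two.
    """
    var_pre, var_post = sigma_pre ** 2, sigma_post ** 2
    var_sum = var_pre + var_post

    # Under a common cause, the discrepancy between the two signals is
    # Gaussian with the summed variances.
    d = x_pre - x_post
    like_c1 = math.exp(-d ** 2 / (2 * var_sum)) / math.sqrt(2 * math.pi * var_sum)

    # Under separate causes, assume (simplifying) that the postsaccadic
    # position is uniform over some plausible spatial range.
    like_c2 = 1.0 / uniform_range

    # Posterior probability of a common cause (Bayes' rule).
    post_c1 = (like_c1 * p_common) / (
        like_c1 * p_common + like_c2 * (1.0 - p_common))

    # C=1 estimate: precision-weighted fusion of the two signals.
    w_pre = var_post / var_sum
    fused = w_pre * x_pre + (1.0 - w_pre) * x_post

    # C=2 estimate: rely on the presaccadic memory alone.
    segregated = x_pre

    # Model averaging across the two causal structures.
    return post_c1 * fused + (1.0 - post_c1) * segregated
```

With equal noise levels, a small postsaccadic displacement pulls the localization toward the integrated position, while a large displacement leaves the estimate near the presaccadic memory, which is the qualitative pattern that produces SSD for small jumps only.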