Li Shi Pui Donald, Shao Jiayu, Lu Zhengang, McCloskey Michael, Park Soojin
Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA.
Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA.
bioRxiv. 2024 Jul 5:2024.07.03.601933. doi: 10.1101/2024.07.03.601933.
Human navigation relies heavily on visual information. Although many previous studies have investigated how navigational information is inferred from the visual features of scenes, little is known about the impact of navigational experience on visual scene representation. In this study, we examined how navigational experience influences both behavioral and neural responses to a visual scene. During training, participants navigated virtual reality (VR) environments in which we manipulated navigational experience while holding the visual properties of the scenes constant. Half of the environments allowed free navigation (navigable), while the other half featured an 'invisible wall' that prevented participants from moving forward even though the scene was visually navigable (non-navigable). During testing, participants viewed scene images from the VR environments while completing either a behavioral perceptual identification task (Experiment 1) or an fMRI scan (Experiment 2). Behaviorally, we found that participants judged a scene pair to be significantly more visually different if their prior navigational experience with the scenes differed, even after accounting for visual similarity between the scene pairs. Neurally, multi-voxel patterns in the parahippocampal place area (PPA) distinguished visual scenes based on prior navigational experience alone. These results suggest that the human visual scene cortex represents navigability information obtained through prior experience, beyond what is computable from the visual properties of the scene. Taken together, these findings suggest that scene representation is modulated by prior navigational experience, helping us construct a functionally meaningful visual environment.
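The neural claim rests on multi-voxel pattern analysis: if PPA response patterns to visually matched scenes can be classified by navigability above chance, the region must carry experience-dependent information. The abstract does not specify the decoding pipeline, so the following is only a minimal toy sketch of the general logic, using simulated "voxel" patterns and a simple nearest-centroid classifier with leave-one-scene-out cross-validation; the voxel count, effect size, and classifier are illustrative assumptions, not the authors' method.

```python
import random

random.seed(0)

N_VOXELS = 50           # assumed size of a simulated PPA pattern
N_SCENES = 20           # assumed scenes per navigability condition


def simulate_pattern(bias):
    # Each "voxel" responds with Gaussian noise around a small
    # condition-specific mean, mimicking a weak but consistent
    # navigability signal distributed across the pattern.
    return [random.gauss(bias, 1.0) for _ in range(N_VOXELS)]


navigable = [simulate_pattern(+0.3) for _ in range(N_SCENES)]
non_navigable = [simulate_pattern(-0.3) for _ in range(N_SCENES)]


def centroid(patterns):
    # Voxel-wise mean of a set of patterns.
    return [sum(v) / len(v) for v in zip(*patterns)]


def dist(a, b):
    # Euclidean distance between two patterns.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def decode(train_nav, train_non, test_pattern):
    # Nearest-centroid classification: assign the held-out pattern to
    # the condition whose mean training pattern it lies closer to.
    c_nav, c_non = centroid(train_nav), centroid(train_non)
    return "nav" if dist(test_pattern, c_nav) < dist(test_pattern, c_non) else "non"


# Leave-one-scene-out cross-validation over both conditions.
correct, total = 0, 0
for i in range(N_SCENES):
    correct += decode(navigable[:i] + navigable[i + 1:], non_navigable,
                      navigable[i]) == "nav"
    correct += decode(navigable, non_navigable[:i] + non_navigable[i + 1:],
                      non_navigable[i]) == "non"
    total += 2

accuracy = correct / total
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

Above-chance cross-validated accuracy in such a setup is what licenses the inference that the patterns encode navigability; with the scenes' visual properties held constant by design, the decodable signal can only reflect prior navigational experience.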