Mynick Anna, Steel Adam, Jayaraman Adithi, Botch Thomas L, Burrows Allie, Robertson Caroline E
Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA.
Curr Biol. 2025 Jan 6;35(1):121-130.e6. doi: 10.1016/j.cub.2024.11.024. Epub 2024 Dec 17.
Each view of our environment captures only a subset of our immersive surroundings. Yet, our visual experience feels seamless. A puzzle for human neuroscience is to determine what cognitive mechanisms enable us to overcome our limited field of view and efficiently anticipate new views as we sample our visual surroundings. Here, we tested whether memory-based predictions of upcoming scene views facilitate efficient perceptual judgments across head turns. We tested this hypothesis using immersive, head-mounted virtual reality (VR). After learning a set of immersive real-world environments, participants (n = 101 across 4 experiments) were briefly primed with a single view from a studied environment and then turned left or right to make a perceptual judgment about an adjacent scene view. We found that participants' perceptual judgments were faster when they were primed with images from the same (vs. neutral or different) environments. Importantly, priming required memory: it only occurred in learned (vs. novel) environments, where the link between adjacent scene views was known. Further, consistent with a role in supporting active vision, priming only occurred in the direction of planned head turns and only benefited judgments for scene views presented in their learned spatiotopic positions. Taken together, we propose that memory-based predictions facilitate rapid perception across large-scale visual actions, such as head and body movements, and may be critical for efficient behavior in complex immersive environments.