Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA.
Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA.
Neuron. 2019 Jan 2;101(1):178-192.e7. doi: 10.1016/j.neuron.2018.11.004. Epub 2018 Nov 26.
It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.
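To make the analysis pipeline described above concrete, here is a minimal sketch of a voxelwise encoding-model fit followed by PCA of the fitted weights. It assumes ridge regression and off-the-shelf scikit-learn tools on toy data; the feature space, regularization, and data shapes are illustrative assumptions, not the authors' actual features or pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy data: one row per stimulus presentation; columns are features or voxels.
n_stimuli, n_features, n_voxels = 200, 20, 50
X_3d = rng.standard_normal((n_stimuli, n_features))  # hypothetical 3D-structure features (distance/orientation bins)
Y = rng.standard_normal((n_stimuli, n_voxels))       # voxel responses

# Fit a regularized linear model per voxel (RidgeCV handles multi-output targets).
model = RidgeCV(alphas=np.logspace(-2, 4, 7))
model.fit(X_3d, Y)
weights = model.coef_.T                              # shape: (n_features, n_voxels)

# PCA across the voxels' weight vectors to find the dominant dimensions of tuning
# (in the paper, the leading dimensions correspond to distance and openness).
pca = PCA(n_components=2)
voxel_scores = pca.fit_transform(weights.T)          # each voxel projected onto the top PCs
print(pca.explained_variance_ratio_)
```

In practice, model comparison against a low-level 2D feature model would use the same regression machinery with a different feature matrix, comparing held-out prediction accuracy per voxel.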