Department of Psychology, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, the Republic of Korea.
Behav Brain Res. 2024 Aug 5;471:115110. doi: 10.1016/j.bbr.2024.115110. Epub 2024 Jun 11.
Visual features of separable dimensions conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene, focusing on two features important for visually guided navigation: direction and distance. Separate studies have previously shown that the directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how these features are concurrently represented in the OPA. Participants viewed eight types of scenes: four contained a single path and four contained two paths. In single-path scenes, the path pointed either to the left or to the right; in double-path scenes, both directions were present. A glass wall was placed in some paths to restrict navigational distance. To test how the OPA represents path directions and distances, we took three approaches. First, the independent-features approach examined whether the OPA codes direction and distance individually. Second, the integrated-features approach used double-path scenes to test whether directions and distances are integrated into path units rather than pooled as independent features. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA's representations of single-path scenes were similar to those of other single-path scenes sharing either the same direction or the same distance. Representations of double-path scenes were similar to the combination of their two constituent single paths, each a bound unit of direction and distance, rather than to a pooled representation of all features. These results show that the OPA combines the two features into path units, which are then used to build multi-path scenes. Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple navigation-relevant features to represent a "navigation file".
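The key comparison in the abstract, whether a double-path scene's multi-voxel pattern looks like the average of its two constituent single-path patterns (direction and distance bound per path) or like a pooled collection of the same features, can be sketched with simulated data. This is a minimal illustration, not the authors' pipeline: the ROI size, noise level, and the particular way the pooled control is built (same four feature values, but direction and distance bound to the wrong paths) are assumptions made for the example.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_voxels = 200  # hypothetical OPA ROI size

# Simulated voxel patterns for the four single-path conditions
# (direction x distance). In a real analysis these would be
# condition-wise beta estimates extracted from the OPA.
single = {
    ("left", "near"):  rng.standard_normal(n_voxels),
    ("left", "far"):   rng.standard_normal(n_voxels),
    ("right", "near"): rng.standard_normal(n_voxels),
    ("right", "far"):  rng.standard_normal(n_voxels),
}

# A double-path scene combining a near-left path and a far-right path.
# Simulated here as the mean of its constituent paths plus noise.
double = (
    0.5 * (single[("left", "near")] + single[("right", "far")])
    + 0.2 * rng.standard_normal(n_voxels)
)

# Integrated-features prediction: average of the constituent paths,
# each a bound unit of direction and distance.
integrated_pred = 0.5 * (single[("left", "near")] + single[("right", "far")])

# Pooled-features control: the same four feature values, but with
# direction and distance mis-bound across paths (near-right + far-left).
pooled_pred = 0.5 * (single[("right", "near")] + single[("left", "far")])

r_integrated, _ = pearsonr(double, integrated_pred)
r_pooled, _ = pearsonr(double, pooled_pred)
print(f"integrated r = {r_integrated:.2f}, pooled r = {r_pooled:.2f}")
```

With real data, the same comparison would be computed within each participant and tested at the group level; higher similarity for the integrated predictor than the pooled one is the pattern the abstract reports.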