
Combined representation of visual features in the scene-selective cortex.

Author Information

Kang Jisu, Park Soojin

Affiliation

Department of Psychology, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea.

Publication Information

bioRxiv. 2023 Jul 26:2023.07.24.550280. doi: 10.1101/2023.07.24.550280.

Abstract

Visual features of separable dimensions like color and shape conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene. Specifically, we focused on features important for visually guided navigation: direction and distance. Previously, separate works have shown that directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how separate features are concurrently represented in the OPA. Participants saw eight different types of scenes, in which four of them had one path and the other four had two paths. In single-path scenes, path direction was either to the left or to the right. In double-path scenes, both directions were present. Each path contained a glass wall located either near or far, changing the navigational distance. To test how the OPA represents paths in terms of direction and distance features, we took three approaches. First, the independent-features approach examined whether the OPA codes directions and distances independently in single-path scenes. Second, the integrated-features approach explored how directions and distances are integrated into path units, as compared to pooled features, using double-path scenes. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA's representations of single-path scenes were similar to other single-path scenes of either the same direction or the same distance. Representations of double-path scenes were similar to the combination of two constituent single-paths, as a combined unit of direction and distance rather than pooled representation of all features. These results show that the OPA combines the two features to form path units, which are then used to build multiple-path scenes. 
Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple features relevant for navigation and represents them.
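The integrated-features logic of the abstract can be sketched in code: under that account, the voxel pattern evoked by a double-path scene should correlate with the average of the patterns evoked by its two constituent single-path scenes. The following is a minimal illustrative sketch with simulated data, not the authors' analysis pipeline; all variable names and noise parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100  # hypothetical ROI size, e.g. voxels in the OPA

# Hypothetical single-path condition patterns, each defined by a
# (direction, distance) pair, e.g. left-near and right-far.
left_near = rng.normal(size=n_voxels)
right_far = rng.normal(size=n_voxels)

# Integrated-features prediction: the double-path scene pattern is
# approximated by the combination (here, the mean) of its two
# constituent single-path patterns.
double_predicted = (left_near + right_far) / 2

# Simulated "observed" double-path pattern: prediction plus measurement
# noise (noise level is an arbitrary choice for illustration).
double_observed = double_predicted + 0.3 * rng.normal(size=n_voxels)

def pattern_similarity(a, b):
    """Multi-voxel pattern similarity as a Pearson correlation."""
    return float(np.corrcoef(a, b)[0, 1])

sim = pattern_similarity(double_observed, double_predicted)
print(f"similarity to combined single-path patterns: {sim:.2f}")
```

In the actual study, such similarity values for the combined-unit model would be contrasted against a pooled-features alternative across participants and conditions; this sketch only shows the core comparison of one observed pattern against one model pattern.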


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d40/10402097/a43bfb55ecba/nihpp-2023.07.24.550280v1-f0001.jpg
