Department of Psychology, The Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania.
Brain Behav. 2019 Oct;9(10):e01373. doi: 10.1002/brb3.1373. Epub 2019 Sep 27.
How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge-based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.
Participants (N = 20) viewed, in the MRI scanner, a series of objects that varied in their degree of visual and semantic overlap. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions, and in a distributed network of object processing regions via an exploratory whole-brain analysis.
Somewhat surprisingly, within higher-order visual processing regions (specifically, lateral occipital cortex, LOC) we did not obtain any difference in neural adaptation for shared visual versus semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category-selective regions. Adaptation was seen in a whole-brain network of processing regions in response to visual similarity and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.
Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., AnG and DMPFC). We conclude that category-relevant visual features are maintained in higher-order conceptual representations, and that visual information plays an important role in both the acquisition and neural representation of conceptual object categories.