Centre for Speech, Language, and the Brain, Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, United Kingdom.
J Neurosci. 2013 Nov 27;33(48):18906-16. doi: 10.1523/JNEUROSCI.3809-13.2013.
Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.
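The representational similarity analysis described above compares the pairwise dissimilarity structure of neural activation patterns against a model of the stimuli's semantic category structure. The following is a minimal numpy-only sketch of that RSA step, using synthetic data in place of the fMRI searchlight patterns; the array sizes, noise levels, and two-category structure are illustrative assumptions, not values from the study.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (rows = stimuli, cols = voxels)."""
    return 1.0 - np.corrcoef(patterns)

def upper_triangle(m):
    """Off-diagonal upper-triangle entries of a symmetric RDM,
    the values actually compared in RSA."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def spearman(a, b):
    """Simple Spearman rank correlation (rank-transform, then Pearson).
    Ignores proper tie handling; adequate for a sketch."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)

# Hypothetical data: 12 stimuli x 50 voxels, presented as words and as
# pictures, sharing one 2-category semantic structure (first 6 vs last 6).
category = np.repeat([0, 1], 6)
prototypes = rng.normal(size=(2, 50))          # one pattern per category
words = prototypes[category] + 0.8 * rng.normal(size=(12, 50))
pictures = prototypes[category] + 0.8 * rng.normal(size=(12, 50))

# Model RDM coding only category membership (0 = same, 1 = different),
# analogous to the semantic category model common to both modalities.
model_rdm = (category[:, None] != category[None, :]).astype(float)

r_words = spearman(upper_triangle(rdm(words)), upper_triangle(model_rdm))
r_pics = spearman(upper_triangle(rdm(pictures)), upper_triangle(model_rdm))
print(f"word RDM vs category model:    rho = {r_words:.2f}")
print(f"picture RDM vs category model: rho = {r_pics:.2f}")
```

In a searchlight analysis this comparison is repeated at every brain location, with `words` and `pictures` replaced by the local voxel patterns, yielding a map of where the neural dissimilarity structure matches the category model in each modality.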