Centre for Mind/Brain Sciences, University of Trento, Italy.
Department of Psychology, Université de Montréal, Montréal, QC, Canada.
Neuroimage. 2021 Jul 15;235:118016. doi: 10.1016/j.neuroimage.2021.118016. Epub 2021 Apr 2.
When primates (both human and non-human) learn to categorize simple visual or acoustic stimuli through non-verbal matching tasks, two types of change occur in their brains: early sensory cortices increase the precision with which they encode sensory information, and parietal and lateral prefrontal cortices develop a categorical response to the stimuli. Unlike non-human animals, however, our species mostly constructs categories using linguistic labels. Moreover, we naturally tend to define categories by multiple sensory features of the stimuli. Here we trained adult subjects to parse a novel audiovisual stimulus space into four orthogonal categories by associating each category with a specific symbol. We then used multi-voxel pattern analysis (MVPA) to show that three neural representational changes were detectable during a cross-format category repetition detection task. First, visual and acoustic cortices increased both precision and selectivity for their preferred sensory feature, displaying increased sensory segregation. Second, a frontoparietal network developed a multisensory object-specific response. Third, the right hippocampus and, at least to some extent, the left angular gyrus developed a shared representational code common to symbols and objects. In particular, the right hippocampus displayed the highest level of abstraction and generalization from one format to the other, and it also predicted symbolic categorization performance outside the scanner. Taken together, these results indicate that when humans categorize multisensory objects by means of language, the set of changes occurring in the brain only partially overlaps with that described by classical models of non-verbal unisensory categorization in primates.
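The cross-format generalization analysis described above can be illustrated with a minimal sketch: a classifier trained on response patterns evoked by one stimulus format (e.g. symbols) is tested on patterns evoked by the other format (e.g. audiovisual objects); above-chance transfer accuracy indicates a shared representational code. This is not the authors' actual pipeline; all data below are synthetic, and names such as `n_voxels` and the noise model are illustrative assumptions.

```python
# Hedged sketch of cross-format MVPA decoding on synthetic data.
# Assumption: category information is carried by a voxel-pattern "template"
# shared across formats, with independent trial-by-trial noise per format.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 80, 200, 4  # illustrative sizes

# One multivariate pattern per category, shared across the two formats.
category_templates = rng.normal(size=(n_categories, n_voxels))
labels = rng.integers(0, n_categories, size=n_trials)

def simulate_format(noise_sd=1.0):
    """Simulated single-trial patterns for one stimulus format."""
    noise = noise_sd * rng.normal(size=(n_trials, n_voxels))
    return category_templates[labels] + noise

X_symbols = simulate_format()   # e.g. symbol trials
X_objects = simulate_format()   # e.g. audiovisual object trials

# Train on one format, test generalization to the other.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(X_symbols, labels)
acc = clf.score(X_objects, labels)
print(f"cross-format decoding accuracy: {acc:.2f}")  # chance level = 0.25
```

In practice, such analyses are run per region of interest (or searchlight) with proper cross-validation and permutation testing; here the shared templates guarantee transfer, whereas a region coding only one format would yield near-chance accuracy.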