School of Medicine, Vanderbilt University, Nashville, Tennessee 37240
Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37240
J Neurosci. 2020 Jul 15;40(29):5604-5615. doi: 10.1523/JNEUROSCI.2139-19.2020. Epub 2020 Jun 4.
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is whether they are animate or inanimate. In addition, many objects are specified by more than a single sense, yet how multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects, which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at intervals of 100-200 ms and 350-500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

SIGNIFICANCE STATEMENT Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses.
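For readers who want a concrete picture of the method, the sketch below illustrates a minimal representational similarity analysis over simulated EEG data: a representational dissimilarity matrix (RDM) is built at each timepoint from exemplar-by-channel patterns, and RDMs are then compared across conditions. All array shapes, condition names, and the random data are illustrative assumptions, not the authors' actual pipeline.

    # Minimal RSA sketch over simulated EEG (illustrative only, not the paper's code).
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical data: (exemplars, channels, timepoints) per condition.
    n_exemplars, n_channels, n_times = 24, 64, 120
    eeg = {cond: rng.standard_normal((n_exemplars, n_channels, n_times))
           for cond in ("auditory", "visual", "audiovisual")}

    def rdm_over_time(data):
        # Condensed RDM (1 - Pearson r between exemplar patterns) at each timepoint.
        return np.stack([pdist(data[:, :, t], metric="correlation")
                         for t in range(data.shape[2])])

    rdms = {cond: rdm_over_time(x) for cond, x in eeg.items()}

    # Compare representational geometry across conditions, timepoint by timepoint.
    for t in (50, 100):
        rho, _ = spearmanr(rdms["audiovisual"][t], rdms["visual"][t])
        print(f"t={t}: AV vs V RDM similarity, Spearman rho = {rho:.3f}")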
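The distance-to-bound logic that links decoding to reaction times can likewise be illustrated with a toy example: an animacy classifier is trained on exemplar patterns, each exemplar's distance from the decision boundary is read out, and those distances are correlated with reaction times (simulated here so that far-from-boundary exemplars are categorized faster). The classifier choice, feature counts, and RT model are assumptions for illustration; a real analysis would also cross-validate rather than score the training data.

    # Toy distance-to-bound analysis (illustrative assumptions throughout).
    import numpy as np
    from sklearn.svm import LinearSVC
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_exemplars, n_features = 48, 64          # e.g., EEG channels at one timepoint
    X = rng.standard_normal((n_exemplars, n_features))
    y = np.repeat([0, 1], n_exemplars // 2)   # 0 = inanimate, 1 = animate

    clf = LinearSVC().fit(X, y)               # linear animacy classifier
    dist = np.abs(clf.decision_function(X))   # distance to the decision boundary

    # Simulated RTs: exemplars farther from the boundary are categorized faster.
    rt = 500.0 - 40.0 * dist + 20.0 * rng.standard_normal(n_exemplars)

    rho, p = spearmanr(dist, rt)
    print(f"distance-to-bound vs RT: rho = {rho:.3f}, p = {p:.3g}")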