Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France.
Risk-Eraser, West Falmouth, Massachusetts, USA.
Eur J Neurosci. 2024 Jun;59(12):3203-3223. doi: 10.1111/ejn.16328. Epub 2024 Apr 18.
Social communication draws on several cognitive functions such as perception, emotion recognition and attention. The association of audio-visual information is essential to the processing of species-specific communication signals. In this study, we used functional magnetic resonance imaging to identify the subcortical areas involved in the cross-modal association of visual and auditory information based on their common social meaning. We identified three subcortical regions involved in the audio-visual processing of species-specific communicative signals: the dorsolateral amygdala, the claustrum and the pulvinar. These regions responded to visual, congruent auditory and audio-visual stimulations. However, none of them was significantly activated when the auditory stimuli were semantically incongruent with the visual context, thus showing an influence of visual context on auditory processing. For example, positive vocalizations (coos) activated all three subcortical regions when presented in the context of a positive facial expression (lipsmacks) but not when presented in the context of a negative facial expression (aggressive faces). In addition, the medial pulvinar and the amygdala showed multisensory integration, such that audio-visual stimuli produced activations significantly higher than those observed for the strongest unimodal response. Last, the pulvinar responded in a task-dependent manner, along a specific spatial sensory gradient. We propose that the dorsolateral amygdala, the claustrum and the pulvinar belong to a multisensory network that modulates the perception of visual socioemotional information and vocalizations as a function of the relevance of the stimuli in the social context.

SIGNIFICANCE STATEMENT: Understanding and correctly associating socioemotional information across sensory modalities, such that happy faces predict laughter and escape scenes predict screams, is essential when living in complex social groups.
Using functional magnetic resonance imaging in the awake macaque, we identify three subcortical structures, the dorsolateral amygdala, claustrum and pulvinar, that respond to auditory information only when it matches the ongoing visual socioemotional context, such as hearing positively valenced coo calls while seeing positively valenced mutual grooming in monkeys. We additionally describe task-dependent activations in the pulvinar, organized along a specific spatial sensory gradient, supporting its role as a network regulator.