Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
Facultad de Psicología, Universidad Autónoma de Madrid, Spain.
Neuropsychologia. 2019 Aug;131:9-24. doi: 10.1016/j.neuropsychologia.2019.05.027. Epub 2019 May 31.
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory- and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral ones, reaching significance at 100-200 ms post-onset for auditory, visual, and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at broadly similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the mechanism underlying this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways and their convergence within the limbic system.