Department of Psychiatry, RWTH Aachen University, Aachen, Germany.
Rev Neurosci. 2012;23(4):381-92. doi: 10.1515/revneuro-2012-0040.
In our everyday lives, we perceive emotional information via multiple sensory channels. This is particularly evident for emotional faces and voices in a social context. In recent years, a multitude of studies have addressed the question of how affective cues conveyed by the auditory and visual channels are integrated. Behavioral studies show that hearing and seeing emotional expressions can support and influence each other, a notion that is supported by investigations of the underlying neurobiology. Numerous electrophysiological and neuroimaging studies have identified brain regions subserving the integration of multimodal emotions and have provided new insights into the neural processing steps underlying the synergistic confluence of affective information from voice and face. In this paper, we provide a comprehensive review covering current behavioral, electrophysiological, and functional neuroimaging findings on the combination of emotions from the auditory and visual domains. Behavioral advantages arising from multimodal redundancy are paralleled by specific integration patterns at the neural level, from encoding in early sensory cortices to late cognitive evaluation in higher association areas. In summary, these findings indicate that bimodal emotions interact at multiple stages of the audiovisual integration process.