Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom.
Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom; Institut de Neurosciences de la Timone, Unité Mixte de Recherche (UMR) 7289, Centre National de la Recherche Scientifique-Aix-Marseille Université, F-13284 Marseille, France.
J Neurosci. 2014 May 14;34(20):6813-21. doi: 10.1523/JNEUROSCI.4478-13.2014.
The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions reflects interleaved populations of unisensory neurons responding to face or voice, or rather multimodal neurons receiving input from both modalities, remains unclear. Here, we examine this question using functional magnetic resonance imaging (fMRI) adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy adult human subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli presented in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion, although face information was weighted more heavily, and they showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, the fMRI signal in the right pSTS was reduced in response to a stimulus in which the facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
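As an illustration only (not the authors' analysis code), the logic of adaptation in a continuous carryover design can be sketched as follows: each trial's predicted signal depends on how similar the current stimulus is to the immediately preceding one, computed within modality (face-to-face, voice-to-voice) and across modality (current facial emotion versus preceding vocal emotion, the comparison reported for the right pSTS). The morph levels, trial counts, and variable names below are hypothetical.

import numpy as np

# Hypothetical emotion morph levels in [0, 1] (0 = angry, 1 = happy),
# manipulated independently for face and voice on each trial.
rng = np.random.default_rng(0)
n_trials = 20
face = rng.uniform(0, 1, n_trials)   # facial emotion morph level per trial
voice = rng.uniform(0, 1, n_trials)  # vocal emotion morph level per trial

# Within-modality adaptation regressors: distance between the current and
# previous stimulus in the same modality (smaller distance = more repetition,
# hence stronger expected signal reduction).
face_to_face = np.abs(face[1:] - face[:-1])
voice_to_voice = np.abs(voice[1:] - voice[:-1])

# Crossmodal adaptation regressor: distance between the current facial
# emotion and the preceding vocal emotion.
face_to_prev_voice = np.abs(face[1:] - voice[:-1])

# In a real GLM these parametric regressors would be convolved with an HRF
# and entered as predictors of the BOLD response from trial 2 onward.
print(face_to_face[:5], voice_to_voice[:5], face_to_prev_voice[:5])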