Annie Pye, Patricia E. G. Bestelmeyer
School of Psychology, Bangor University, Gwynedd LL57 2AS, UK.
Cognition. 2015 Jan;134:245-51. doi: 10.1016/j.cognition.2014.11.001. Epub 2014 Nov 19.
Successful social interaction hinges on accurate perception of emotional signals. These signals are typically conveyed multi-modally by the face and voice. Previous research has demonstrated uni-modal contrastive aftereffects for emotionally expressive faces or voices. Here we were interested in whether these aftereffects transfer across modality, as theoretical models predict. We show that adaptation to facial expressions elicits significant auditory aftereffects. Adaptation to angry facial expressions caused ambiguous vocal stimuli drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful relative to adaptation to fearful faces. In a second experiment, we demonstrate that these aftereffects do not depend on learned face-voice congruence, i.e., adaptation to one facial identity transferred to an unmatched voice identity. Taken together, our findings provide support for a supra-modal representation of emotion and further suggest that identity and emotion may be processed independently of one another, at least at the supra-modal level of the processing hierarchy.