Takagi Sachiko, Hiramatsu Saori, Tabei Ken-Ichi, Tanaka Akihiro
Tokyo Woman's Christian University, Tokyo, Japan.
Waseda Institute for Advanced Study, Tokyo, Japan.
Front Integr Neurosci. 2015 Feb 2;9:1. doi: 10.3389/fnint.2015.00001. eCollection 2015.
Previous studies have shown that facial and vocal affective expressions interact in perception, and that facial expressions usually dominate vocal expressions when we perceive the emotions of face-voice stimuli. In most of these studies, participants were instructed to pay attention to the face or the voice; few studies have compared the emotions perceived with and without specific instructions about the modality to which attention should be directed. Moreover, these studies used combinations of faces and voices expressing two opposing emotions, which limits the generalizability of the findings. The purpose of this study was to examine, using the six basic emotions, whether emotion perception is modulated by instructions to pay attention to the face or the voice. We also examined the modality dominance between face and voice for each emotion category. Before the experiment, we recorded faces and voices expressing the six basic emotions and combined these faces and voices orthogonally, so that the emotional valence of the visual and auditory information was either congruent or incongruent. The experiment consisted of unisensory and multisensory sessions. The multisensory session was divided into three blocks according to the attentional instruction given (attend to the face, attend to the voice, or no instruction). Participants judged whether the speaker expressed happiness, sadness, anger, fear, disgust, or surprise. Our results revealed that instructions to pay attention to one modality and the congruency of the emotions between modalities modulated modality dominance, and that modality dominance differed across emotion categories. In particular, the modality dominance for anger changed according to the instruction. Further analyses revealed that the modality dominance suggested by the congruency effect can be explained in terms of facilitation and interference effects.
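As a concrete illustration of the orthogonal face-voice combination described in the abstract, the following minimal Python sketch enumerates the stimulus set; it is not from the paper, and the variable names (EMOTIONS, the face/voice/congruent fields) are hypothetical:

```python
from itertools import product

EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

# Orthogonally combine every face emotion with every voice emotion,
# yielding 6 x 6 = 36 face-voice stimuli.
stimuli = [
    {"face": face, "voice": voice, "congruent": face == voice}
    for face, voice in product(EMOTIONS, EMOTIONS)
]

# Congruent pairs express the same emotion in both modalities (6 pairs);
# incongruent pairs express different emotions (30 pairs).
congruent = [s for s in stimuli if s["congruent"]]
incongruent = [s for s in stimuli if not s["congruent"]]
assert len(congruent) == 6 and len(incongruent) == 30
```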