Ziereis Annika, Schacht Anne
Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany.
Psychophysiology. 2023 Nov;60(11):e14380. doi: 10.1111/psyp.14380. Epub 2023 Jun 30.
Social and emotional cues from faces and voices are highly relevant and have been reliably demonstrated to attract attention involuntarily. However, findings are mixed as to the degree to which associating emotional valence with faces occurs automatically. In the present study, we tested whether inherently neutral faces gain additional relevance by being conditioned with positive, negative, or neutral vocal affect bursts. During learning, participants performed a gender-matching task on face-voice pairs without making explicit emotion judgments of the voices. In the test session on a subsequent day, only the previously associated faces were presented and had to be categorized by gender. We analyzed event-related potentials (ERPs), pupil diameter, and response times (RTs) of N = 32 subjects. Emotion effects were found in auditory ERPs and RTs during the learning session, suggesting that task-irrelevant emotion was processed automatically. However, ERPs time-locked to the conditioned faces were modulated mainly by the task-relevant information, that is, the gender congruence of face and voice, but not by emotion. Importantly, these ERP and RT effects of learned congruence were not limited to learning but extended to the test session, that is, after the auditory stimuli had been removed. These findings indicate successful associative learning in our paradigm, but learning did not extend to the task-irrelevant dimension of emotional relevance. Therefore, cross-modal associations of emotional relevance may not be completely automatic, even though the emotion was processed in the voice.