Institute of Applied Psychology, School of Public Administration, Guangdong University of Finance, 510521, Guangzhou, China; Laboratory for Behavioral and Regional Finance, Guangdong University of Finance, 510521, Guangzhou, China.
School of Education, Guangdong University of Education, 510303, Guangzhou, China.
Neuropsychologia. 2019 Sep;132:107147. doi: 10.1016/j.neuropsychologia.2019.107147. Epub 2019 Jul 17.
It has been shown that stimulus memory (e.g., encoding and recognition) is influenced by emotion. For face memory, event-related potential (ERP) studies have shown that the encoding of emotional faces is influenced by the emotion of a concomitant context when the contextual stimuli are presented in the visual modality. Behavioral studies have also investigated the effect of contextual emotion on the subsequent recognition of neutral faces. However, to our knowledge, no study has examined context effects on face encoding and recognition when the contextual stimuli are presented in another sensory modality (e.g., the auditory modality). In addition, the neural mechanisms underlying context effects on the recognition of emotional faces remain unclear. Therefore, the present study used vocal expressions as contexts to investigate whether contextual emotion influences ERP responses during face encoding and recognition. To this end, participants were asked to memorize angry and neutral faces that were presented together with either angry or neutral vocal expressions. Subsequently, participants performed an old/new recognition task in which only faces were presented. In the encoding phase, ERP results showed that, compared with neutral vocal expressions, angry vocal expressions led to smaller P1 and N170 responses to both angry and neutral faces. For angry faces, however, late positive potential (LPP) responses were larger in the angry-voice condition. In the subsequent recognition phase, N170 responses were larger to neutral-encoded faces that had been presented with angry rather than neutral vocal expressions. A preceding angry vocal expression also increased FN400 and LPP responses to both neutral-encoded and angry-encoded faces when the faces were presented with their encoded expression. Taken together, the present study indicates that the contextual emotion of vocal expressions influences neural responses during face encoding and subsequent recognition.
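For illustration, the sketch below shows how ERP responses to faces in the 2 (face: angry/neutral) x 2 (voice context: angry/neutral) encoding design could be epoched and summarized with MNE-Python. This is not the authors' analysis pipeline; the file name, trigger codes, channel picks, filter settings, and time windows are illustrative assumptions.

```python
# Minimal sketch (assumed pipeline, not the authors') of epoching face-locked
# ERPs per encoding condition and extracting an N170-like mean amplitude.
import mne

raw = mne.io.read_raw_fif("encoding_raw.fif", preload=True)  # hypothetical file
raw.filter(0.1, 30.0)  # typical ERP band-pass; exact cutoffs are assumptions

events = mne.find_events(raw)
event_id = {  # assumed trigger codes for the four encoding conditions
    "angry_face/angry_voice": 11,
    "angry_face/neutral_voice": 12,
    "neutral_face/angry_voice": 21,
    "neutral_face/neutral_voice": 22,
}

# Epoch around face onset, baseline-corrected to the 200 ms before onset.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(None, 0), preload=True)

# Average per condition and take the mean amplitude in an assumed N170 window
# (about 150-200 ms) at occipito-temporal sites such as P7/P8.
evokeds = {cond: epochs[cond].average() for cond in event_id}
n170_amplitude = {
    cond: ev.copy().pick(["P7", "P8"]).crop(0.15, 0.20).data.mean()
    for cond, ev in evokeds.items()
}
print(n170_amplitude)
```

Analogous windows and electrode sites could be substituted for the P1, LPP, and FN400 components reported in the abstract.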