Grass Annika, Bayer Mareike, Schacht Annekathrin
Courant Research Centre Text Structures, University of Göttingen, Göttingen, Germany; Leibniz-ScienceCampus Primate Cognition, Göttingen, Germany.
Courant Research Centre Text Structures, University of Göttingen, Göttingen, Germany.
Front Hum Neurosci. 2016 Jul 4;10:326. doi: 10.3389/fnhum.2016.00326. eCollection 2016.
For visual stimuli of emotional content, such as pictures and written words, stimulus size has been shown to increase emotion effects in the early posterior negativity (EPN), a component of event-related potentials (ERPs) indexing attention allocation during visual sensory encoding. In the present study, we addressed the question of whether this enhanced relevance of larger (visual) stimuli generalizes to the auditory domain and whether auditory emotion effects are modulated by volume. To this end, subjects listened to spoken words with emotional or neutral content, played at two different volume levels, while ERPs were recorded. Negative emotional content led to an increased frontal positivity and parieto-occipital negativity (a scalp distribution similar to the EPN) between ~370 and 530 ms. Importantly, this emotion-related ERP component was not modulated by differences in volume level. Volume level did, as hypothesized, impact early auditory processing, as reflected in increased amplitudes of the N1 (80-130 ms) and P2 (130-265 ms) components. However, contrary to effects of stimulus size in the visual domain, volume level did not influence later ERP components. These findings indicate modality-specific and functionally independent processing of the emotional content of spoken words and of volume level.