Birkbeck College, University of London, London WC1E 7HX, United Kingdom.
J Cogn Neurosci. 2010 Mar;22(3):474-81. doi: 10.1162/jocn.2009.21215.
The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.