Sauter Disa A, Eisner Frank, Calder Andrew J, Scott Sophie K
Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK.
Q J Exp Psychol (Hove). 2010 Nov;63(11):2251-72. doi: 10.1080/17470211003721642. Epub 2010 Apr 29.
Work on facial expressions of emotion (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions such as laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the "basic" emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997) and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, as with affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
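The analysis pipeline summarized in the abstract (principal components analysis of the rating data, discriminant classification of emotion categories from acoustic measures, and per-scale multiple linear regressions) can be illustrated with a minimal sketch. This is not the authors' code: the arrays of per-stimulus ratings, acoustic measures, and category labels below are hypothetical placeholders, and the estimators are standard scikit-learn implementations assumed to approximate the statistical procedures named in the abstract.

```python
# Minimal sketch of the three analysis steps described in the abstract.
# All data here are random placeholders, so the outputs will sit at
# chance level; with real stimuli, the structure reported in the paper
# (two rating dimensions, above-chance classification, per-scale
# acoustic predictors) would emerge instead.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli = 100

# Hypothetical per-stimulus data: mean ratings on 10 emotion scales,
# 12 acoustic measures (amplitude, pitch, spectral), and one of
# 5 emotion-category labels (20 stimuli per category).
ratings = rng.normal(size=(n_stimuli, 10))
acoustics = rng.normal(size=(n_stimuli, 12))
labels = np.repeat(np.arange(5), 20)

# Step 1: PCA of the rating data; the paper reports two underlying
# dimensions, interpreted as perceived valence and arousal.
pca = PCA(n_components=2)
dims = pca.fit_transform(ratings)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())

# Step 2: discriminant analysis, testing whether the acoustic measures
# separate the emotion categories well enough for statistical
# classification (assessed here by cross-validated accuracy).
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, acoustics, labels, cv=5)
print("mean cross-validated classification accuracy:", acc.mean())

# Step 3: multiple linear regression, predicting each emotion rating
# scale from a combination of the acoustic measures.
for scale in range(ratings.shape[1]):
    reg = LinearRegression().fit(acoustics, ratings[:, scale])
    r2 = reg.score(acoustics, ratings[:, scale])
    print(f"rating scale {scale}: in-sample R^2 = {r2:.2f}")
```

Comparing the regression coefficients across rating scales is what would reveal the abstract's final point: different emotion ratings being predicted by different constellations of acoustic features.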