Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany.
Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany.
Sci Rep. 2021 May 6;11(1):9663. doi: 10.1038/s41598-021-88431-0.
Vocalizations including laughter, cries, moans, or screams constitute a potent source of information about the affective states of others. It is typically conjectured that the higher the intensity of the expressed emotion, the better the classification of affective information. However, attempts to map the relation between affective intensity and inferred meaning are controversial. Using a newly developed stimulus database of carefully validated non-speech expressions ranging across the entire intensity spectrum from low to peak, we show that this intuition is false. In three experiments (N = 90), we demonstrate that intensity in fact plays a paradoxical role. Participants were asked to rate and classify the authenticity, intensity, and emotion, as well as the valence and arousal, of a wide range of vocalizations. Listeners are clearly able to infer expressed intensity and arousal; in contrast, and surprisingly, emotion category and valence have a perceptual sweet spot: moderate and strong emotions are clearly categorized, whereas peak emotions are maximally ambiguous. This finding, which converges with related observations from visual experiments, raises interesting theoretical challenges for the emotion communication literature.