1 Department of Psychology, University of California, Berkeley.
2 Faculty of Social and Behavioural Sciences, University of Amsterdam.
Psychol Sci Public Interest. 2019 Jul;20(1):69-90. doi: 10.1177/1529100619850176.
What would a comprehensive atlas of human emotions include? For 50 years, scientists have sought to map emotion-related experience, expression, physiology, and recognition in terms of the "basic six": anger, disgust, fear, happiness, sadness, and surprise. Claims about the relationships between these six emotions and prototypical facial configurations have provided the basis for a long-standing debate over the diagnostic value of expression (for review and latest installment in this debate, see Barrett et al., p. 1). Building on recent empirical findings and methodologies, we offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion. Dozens of distinct varieties of emotion are reliably distinguished by language, evoked in distinct circumstances, and perceived in distinct expressions of the face, body, and voice. Traditional models, both the basic six and the affective-circumplex model (valence and arousal), capture a fraction of the systematic variability in emotional response. In contrast, emotion-related responses (e.g., the smile of embarrassment, triumphant postures, sympathetic vocalizations, blends of distinct expressions) can be explained by richer models of emotion. Given these developments, we discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally. Determining the full extent of what facial expressions can tell us, marginally and in conjunction with other behavioral and contextual cues, will require mapping the high-dimensional, continuous space of facial, bodily, and vocal signals onto richly multifaceted experiences using large-scale statistical modeling and machine-learning methods.
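The closing sentence points to large-scale statistical modeling of the mapping from high-dimensional signal features to multidimensional emotion judgments. As a purely illustrative, minimal sketch (not the authors' pipeline), the snippet below fits a cross-validated multi-output ridge regression from a simulated matrix of facial/bodily/vocal features to a simulated set of emotion-category ratings; the sample sizes, dimensionalities, and variable names are assumptions introduced here for illustration only.

    # Minimal sketch (assumed setup, simulated data): one linear, regularized map
    # from a high-dimensional signal space to many emotion-rating dimensions.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)

    n_stimuli = 2000       # hypothetical number of expression stimuli
    n_signal_dims = 512    # hypothetical facial/bodily/vocal feature dimensions
    n_emotion_dims = 28    # hypothetical number of rated emotion categories

    # Simulated stimulus features and judge ratings (stand-ins for real data).
    X = rng.normal(size=(n_stimuli, n_signal_dims))
    true_map = rng.normal(scale=0.1, size=(n_signal_dims, n_emotion_dims))
    Y = X @ true_map + rng.normal(scale=0.5, size=(n_stimuli, n_emotion_dims))

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

    # Multi-output ridge regression with cross-validated regularization strength.
    model = RidgeCV(alphas=np.logspace(-2, 3, 12)).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)

    # Held-out variance explained per emotion dimension indicates how much of the
    # rating structure is recoverable from the signal features alone.
    r2_per_dim = r2_score(Y_test, Y_pred, multioutput="raw_values")
    print("mean held-out R^2 across emotion dimensions:", r2_per_dim.mean())

A richer taxonomy in this framing corresponds to a higher-rank, better-generalizing mapping than a six-category or two-dimensional (valence/arousal) model would allow; comparing held-out fit across models of different dimensionality is one way to quantify that claim.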