The Ohio State University, Columbus, OH 43201, USA.
Curr Opin Psychol. 2017 Oct;17:27-33. doi: 10.1016/j.copsyc.2017.06.009. Epub 2017 Jun 21.
Facial expressions of emotion are produced by contracting and relaxing the muscles of the face. I hypothesize that the human visual system solves the inverse problem of production; that is, to interpret an emotion, the visual system attempts to identify the underlying muscle activations. I show converging computational, behavioral and imaging evidence in favor of this hypothesis. I detail the computations performed by the human visual system to achieve the decoding of these facial actions and identify a brain region where these computations likely take place. The resulting computational model explains how humans readily classify emotions into categories as well as along continuous variables. This model also predicts the existence of a large number of previously unknown facial expressions, including compound emotions, affect attributes and mental states that are regularly used by people. I provide evidence in favor of this prediction.
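The following is a minimal illustrative sketch, not the paper's actual model: it frames emotion categorization as the inverse problem of production, in which the observer first recovers the muscle activations (Action Units, AUs) behind an expression and then reads the category off that activation pattern. The AU-to-category table, the function `decode_emotion`, and the similarity measure are all simplifying assumptions introduced here for illustration; the real mappings and computations come from the paper's data and model.

```python
# Illustrative sketch only: categorization as the inverse problem of
# facial-expression production. The prototype table below is a simplified,
# hypothetical example (AU numbers follow the Facial Action Coding System),
# not the mappings reported in the paper.

from typing import Dict, FrozenSet, List

# Hypothetical prototypical AU patterns for a few basic and compound categories.
PROTOTYPES: Dict[str, FrozenSet[int]] = {
    "happy":             frozenset({6, 12}),
    "surprised":         frozenset({1, 2, 25, 26}),
    "happily surprised": frozenset({1, 2, 6, 12, 25}),  # a compound emotion
}


def decode_emotion(observed_aus: FrozenSet[int]) -> List[str]:
    """Rank categories by overlap between observed and prototypical AUs.

    Stands in for the hypothesized computation: recover the muscle
    activations that produced the expression, then classify from them.
    """
    def score(proto: FrozenSet[int]) -> float:
        # Jaccard similarity between observed and prototypical AU sets.
        return len(observed_aus & proto) / len(observed_aus | proto)

    return sorted(PROTOTYPES, key=lambda c: score(PROTOTYPES[c]), reverse=True)


if __name__ == "__main__":
    # With AUs 1, 2, 6, 12 and 25 active, the best match is the compound
    # category "happily surprised" rather than either constituent alone.
    print(decode_emotion(frozenset({1, 2, 6, 12, 25})))
```

The toy similarity ranking is only meant to show why recovering AU activations makes compound categories (such as "happily surprised") fall out naturally alongside the basic ones, as the abstract's prediction suggests.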