Wegrzyn Martin, Münst Laura, König Jessica, Dinter Maximilian, Kissler Johanna
Department of Psychology, Bielefeld University, Bielefeld, Germany.
Acta Psychol (Amst). 2024 Nov;251:104569. doi: 10.1016/j.actpsy.2024.104569. Epub 2024 Nov 2.
According to one prominent model, facial expressions of emotion can be categorized as depicting happiness, disgust, anger, sadness, fear, or surprise. One open question is which facial features observers use to recognize the different expressions, and whether the features indicated by observers can be used to predict which expression they saw. We created fine-grained maps of diagnostic facial features by asking participants to use mouse clicks to highlight those parts of a face that they deemed useful for recognizing its expression. We tested how well the resulting maps align with models of emotion expressions (based on Action Units) and how the maps relate to the accuracy with which observers recognize full or partly masked faces. As expected, observers focused on the eye and mouth regions in all faces. However, each expression deviated from this global pattern in a unique way, allowing us to create maps of diagnostic face regions. The Action Units considered most important for expressing an emotion were highlighted most often, indicating their psychological validity. The maps of facial features also allowed us to correctly predict which expression a participant had seen, with above-chance accuracies for all expressions. For happiness, fear, and anger, the face half that was highlighted the most was also the half whose visibility led to higher recognition accuracies. The results suggest that diagnostic facial features are distributed in unique patterns for each expression, which observers seem to intuitively extract and use when categorizing facial displays of emotion.