Developmental Ethology and Cognitive Psychology Lab, Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, Inrae, Université Bourgogne Franche-Comté, Dijon, France.
Centre Ressource de Réhabilitation Psychosociale et de Remédiation Cognitive, Centre Hospitalier Le Vinatier & Université Lyon 1 (CNRS UMR 5229), Université de Lyon, Lyon, France.
PLoS One. 2021 Jan 26;16(1):e0245777. doi: 10.1371/journal.pone.0245777. eCollection 2021.
Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.