

Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

Authors

Wegrzyn Martin, Vogt Maria, Kireclioglu Berna, Schneider Julia, Kissler Johanna

Affiliations

Department of Psychology, Bielefeld University, Bielefeld, Germany.

Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany.

Publication

PLoS One. 2017 May 11;12(5):e0177239. doi: 10.1371/journal.pone.0177239. eCollection 2017.

Abstract

Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing visualization of the importance of different face areas for each expression. Overall, observers mostly relied on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed grouping of the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eye or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
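The per-tile "contribution to successful recognition" described above can be sketched as a simple contrast between how often a tile was uncovered on correct versus incorrect trials. This is a toy illustration on synthetic data, not the authors' analysis code; the tile layout, trial counts, and the rule generating the toy responses are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_tiles = 200, 48
# Which of the 48 mask tiles were uncovered on each trial (toy data)
visible = rng.integers(0, 2, size=(n_trials, n_tiles)).astype(bool)

# Hypothetical generative rule for the toy data: trials that reveal more of
# the "eye" tiles (here arbitrarily tiles 0-7) are more often recognized
correct = visible[:, :8].mean(axis=1) + 0.1 * rng.standard_normal(n_trials) > 0.5

# Diagnostic value per tile: P(tile visible | correct) - P(tile visible | incorrect)
diagnostic = visible[correct].mean(axis=0) - visible[~correct].mean(axis=0)

# Tiles most associated with successful recognition
top_tiles = np.argsort(diagnostic)[::-1][:5]
```

Reshaping `diagnostic` onto the 6x8 tile grid and plotting it as a heat map would yield the kind of importance map the study reports, with hot spots expected over the regions (eyes, mouth) that drive recognition.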


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad31/5426715/3c8143277603/pone.0177239.g001.jpg
