
Discrimination between smiling faces: Human observers vs. automated face analysis.

Author information

Del Líbano Mario, Calvo Manuel G, Fernández-Martín Andrés, Recio Guillermo

Affiliations

Universidad de Burgos, Burgos, Spain.

Universidad de La Laguna, Tenerife, Spain.

Publication information

Acta Psychol (Amst). 2018 Jun;187:19-29. doi: 10.1016/j.actpsy.2018.04.019. Epub 2018 May 11.

Abstract

This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on the type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). The physical properties of the face stimuli (low-level image statistics and visual saliency) were controlled. Results revealed, first, that some blended expressions (especially those with angry eyes) had lower discrimination thresholds than others (especially those with neutral eyes); that is, they were identified as "non-happy" at lower non-happy-eye intensities. Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but it is currently more limited than human observers in discriminating blended expressions. Configural processing facilitates detection of (in)congruence across facial regions, and thus detection of non-genuine smiling faces (those with non-happy eyes).
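The notion of a discrimination threshold here — the non-happy-eye intensity at which observers start categorizing a blended face as "non-happy" — can be made concrete with a minimal sketch. The data below are hypothetical (not taken from the paper) and the 50% criterion and linear interpolation are illustrative simplifications of a psychometric-function fit:

```python
import numpy as np

# Hypothetical data: proportion of "non-happy" categorizations at each
# non-happy-eye morph intensity (0 = fully happy eyes, 1 = fully
# non-happy eyes) for two blended-expression conditions.
intensities = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p_angry_eyes = np.array([0.02, 0.10, 0.45, 0.80, 0.95, 0.99])
p_neutral_eyes = np.array([0.01, 0.05, 0.20, 0.55, 0.85, 0.97])

def discrimination_threshold(x, p, criterion=0.5):
    """Intensity at which the response proportion crosses the
    criterion, estimated by linear interpolation (p must be
    monotonically increasing for np.interp to be valid)."""
    return float(np.interp(criterion, p, x))

t_angry = discrimination_threshold(intensities, p_angry_eyes)
t_neutral = discrimination_threshold(intensities, p_neutral_eyes)

# A lower threshold means the blend is spotted as "non-happy" at weaker
# eye intensities -- the pattern the abstract reports for angry eyes
# relative to neutral eyes.
print(t_angry, t_neutral)  # t_angry < t_neutral
```

With these illustrative numbers, the angry-eyes condition crosses the criterion near intensity 0.43 and the neutral-eyes condition near 0.57, matching the qualitative ordering reported in the abstract.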

