The resolution of facial expressions of emotion.

Author Information

Du Shichuan, Martinez Aleix M

Affiliation

The Ohio State University, Columbus, OH 43210, USA.

Publication Information

J Vis. 2011 Nov 30;11(13):24. doi: 10.1167/11.13.24.

Abstract

Much is known about how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known about how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.
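To make the two analysis steps described in the abstract concrete, the sketch below illustrates (a) reducing an image to a low pixel resolution such as 20 × 30 and (b) checking a confusion table for asymmetry, i.e., whether emotion a is mistaken for b more often than b for a. This is a minimal illustration under stated assumptions, not the authors' code: the function names, the synthetic image, and the confusion counts are all hypothetical.

```python
# Illustrative sketch (not the authors' code): downsample an image to a low
# resolution and measure asymmetry in a confusion table. All data are made up.
import numpy as np

def downsample(image: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Block-average a grayscale image down to (out_h, out_w) pixels."""
    h, w = image.shape
    rows = np.linspace(0, h, out_h + 1).astype(int)
    cols = np.linspace(0, w, out_w + 1).astype(int)
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = image[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].mean()
    return out

def asymmetry(confusion: np.ndarray) -> np.ndarray:
    """Difference between P(respond b | shown a) and P(respond a | shown b)."""
    p = confusion / confusion.sum(axis=1, keepdims=True)  # row-normalize counts
    return p - p.T

if __name__ == "__main__":
    # A synthetic 240x160 grayscale "face" reduced to the 30x20-pixel level
    # the abstract identifies as the point below which recognition degrades.
    face = np.random.rand(240, 160)
    low_res = downsample(face, 30, 20)
    print(low_res.shape)  # (30, 20)

    # Hypothetical counts: rows = emotion shown, columns = emotion reported.
    emotions = ["happy", "sad", "fear", "surprise"]
    counts = np.array([
        [90,  2,  3,  5],
        [ 6, 80, 10,  4],
        [ 4, 12, 60, 24],
        [ 3,  2,  8, 87],
    ])
    d = asymmetry(counts)
    # d[i, j] > 0 means emotion i is mistaken for j more often than j for i,
    # the kind of asymmetric confusion pattern the abstract reports.
    print(np.round(d, 2))
```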
