

Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective.

Affiliations

Department of Psychology, Normal College, Shihezi University, Xinjiang, China.

Department of Psychiatry and Behavioral Sciences, College of Medicine, Medical University of South Carolina, Charleston, SC, USA.

Publication Information

Atten Percept Psychophys. 2021 Jul;83(5):2159-2173. doi: 10.3758/s13414-021-02281-6. Epub 2021 Mar 23.

Abstract

A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on pre-perceptual visual cues, i.e., how salient facial features or configurations are displayed; or (b) they focused on post-perceptual affective experiences, i.e., how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; can we therefore classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions based on reaction times. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: one group comprises expressions with salient mouth features, which likely link to species-specific vocalization, for example, crying and laughing. The second group comprises visual displays with diagnostic features in both the mouth and the eye regions; these are not directly articulable but can be expressed prosodically, for example, sadness and anger. Expressions in the third group are also whole-face expressions but are completely independent of vocalization, and are likely blends of two or more elementary expressions. We propose a theoretical framework to interpret this tripartite division, in which the distinct expression subsets are interpreted as successive phases in an evolutionary chain.
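As a rough illustration of the analysis pipeline the abstract describes (reaction-time dissimilarities, multidimensional scaling, then cluster analysis), the Python sketch below runs metric MDS and average-linkage hierarchical clustering on a random stand-in dissimilarity matrix. This is not the authors' code: the matrix contents, the MDS variant, and the linkage method are all assumptions made for illustration.

```python
# Minimal sketch of an RT-based MDS + clustering pipeline.
# The dissimilarity matrix is random stand-in data, NOT the study's data.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_expressions = 20

# Placeholder for the empirical RT-derived dissimilarity matrix:
# symmetric, zero diagonal, one row/column per facial expression.
rt_dissim = rng.random((n_expressions, n_expressions))
rt_dissim = (rt_dissim + rt_dissim.T) / 2.0  # enforce symmetry
np.fill_diagonal(rt_dissim, 0.0)             # zero self-dissimilarity

# Multidimensional scaling: embed the 20 expressions in 2-D so that
# inter-point distances approximate the perceptual dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rt_dissim)

# Hierarchical (average-linkage) clustering on the same dissimilarities,
# cut into three clusters to mirror the three superordinate categories
# reported in the paper.
tree = linkage(squareform(rt_dissim, checks=False), method="average")
labels = fcluster(tree, t=3, criterion="maxclust")

print("MDS coordinates (20 x 2):\n", np.round(coords, 3))
print("Cluster assignment per expression:", labels)
```

With an empirical RT matrix substituted for the random one, the leading MDS dimension would be expected to track the upper-lower face axis, and the three-cluster cut corresponds to the three superordinate categories the paper reports.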

