
A data-driven characterisation of natural facial expressions when giving good and bad news.

Affiliations

School of Psychology, University of Nottingham, Nottingham, United Kingdom.

Publication information

PLoS Comput Biol. 2020 Oct 28;16(10):e1008335. doi: 10.1371/journal.pcbi.1008335. eCollection 2020 Oct.

Abstract

Facial expressions carry key information about an individual's emotional state. Research into the perception of facial emotions typically employs static images of a small number of artificially posed expressions taken under tightly controlled experimental conditions. However, such approaches risk missing potentially important facial signals and within-person variability in expressions. The extent to which patterns of emotional variance in such images resemble more natural ambient facial expressions remains unclear. Here we advance a novel protocol for eliciting natural expressions from dynamic faces, using a dimension of emotional valence as a test case. Subjects were video recorded while delivering either positive or negative news to camera, but were not instructed to deliberately or artificially pose any specific expressions or actions. A PCA-based active appearance model was used to capture the key dimensions of facial variance across frames. Linear discriminant analysis distinguished facial change determined by the emotional valence of the message, and this also generalised across subjects. By sampling along the discriminant dimension, and back-projecting into the image space, we extracted a behaviourally interpretable dimension of emotional valence. This dimension highlighted changes commonly represented in traditional face stimuli such as variation in the internal features of the face, but also key postural changes that would typically be controlled away such as a dipping versus raising of the head posture from negative to positive valences. These results highlight the importance of natural patterns of facial behaviour in emotional expressions, and demonstrate the efficacy of using data-driven approaches to study the representation of these cues by the perceptual system. The protocol and model described here could be readily extended to other emotional and non-emotional dimensions of facial variance.
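The analysis pipeline described above (a PCA-based appearance model over frames, a linear discriminant on the component scores, then sampling along the discriminant axis and back-projecting into the original space) can be illustrated on synthetic data. This is a minimal sketch, not the authors' code: the data, dimensions, and variable names are all assumptions, and a real active appearance model would operate on aligned landmark and texture vectors rather than random features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame appearance vectors (e.g. stacked
# landmark coordinates and pixel intensities); shapes are illustrative.
n_frames, n_features, n_components = 200, 50, 10
labels = rng.integers(0, 2, n_frames)   # 0 = negative, 1 = positive valence
X = rng.normal(size=(n_frames, n_features))
X[labels == 1] += 0.5                   # inject a valence-linked offset

# --- PCA over frames (the appearance-model step, reduced to its core) ---
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:n_components]          # principal axes (n_components x n_features)
scores = Xc @ components.T              # per-frame scores in the PCA space

# --- Fisher linear discriminant on the PCA scores ---
m0 = scores[labels == 0].mean(axis=0)
m1 = scores[labels == 1].mean(axis=0)
Sw = (np.cov(scores[labels == 0], rowvar=False)
      + np.cov(scores[labels == 1], rowvar=False))   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)        # discriminant direction
w /= np.linalg.norm(w)

# --- Sample along the discriminant and back-project to feature space ---
# Each reconstructed vector is one point on the negative-to-positive
# valence dimension, expressed in the original appearance space.
for t in np.linspace(-3, 3, 5):
    reconstructed = mean + (t * w) @ components
```

In the paper's setting the back-projected vectors would be rendered as face images, making the discriminant dimension behaviourally interpretable (e.g. head dipping versus raising across valence).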


Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/235d/7652307/4cb165cf7532/pcbi.1008335.g001.jpg
