
Human perception and biosignal-based identification of posed and spontaneous smiles.

Affiliations

Communication Science Laboratories, NTT, Atsugi, Kanagawa, Japan.

Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Ibaraki, Japan.

Publication information

PLoS One. 2019 Dec 12;14(12):e0226328. doi: 10.1371/journal.pone.0226328. eCollection 2019.

Abstract

Facial expressions are behavioural cues that represent an affective state. Because of this, they are an unobtrusive alternative to affective self-report. The perceptual identification of facial expressions can be performed automatically with technological assistance. Once the facial expressions have been identified, interpretation is usually left to a field expert. However, facial expressions do not always represent the felt affect; they can also be a communication tool. Facial expression measurements are therefore prone to the same biases as self-report. Hence, the automatic measurement of human affect should also make inferences about the nature of facial expressions rather than only describing facial movements. We present two experiments designed to assess whether such automated inferential judgment could be advantageous. In particular, we investigated the differences between posed and spontaneous smiles. The aim of the first experiment was to elicit both types of expression. In contrast to other studies, the temporal dynamics of the elicited posed expressions were not constrained by the eliciting instruction. Electromyography (EMG) was used to discriminate between them automatically. Spontaneous smiles were found to differ from posed smiles in magnitude, onset time, and onset and offset speed, independently of the producer's ethnicity. Agreement between the expression type and EMG-based automatic detection reached 94% accuracy. Finally, measurements of agreement between human video coders showed that although agreement on perceptual labels is fairly good, it worsens for inferential labels. A second experiment confirmed that laypersons' accuracy in distinguishing posed from spontaneous smiles is poor. The automatic identification of inferential labels would therefore benefit affective assessment and further research on this topic.
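The abstract reports that EMG-derived smile dynamics (magnitude, onset time, onset and offset speed) separated posed from spontaneous smiles with 94% agreement. As a rough illustration only, the sketch below shows one way such features might be computed from an EMG amplitude envelope and fed to a simple classifier; the feature definitions, the 10%-of-peak onset criterion, the synthetic data, and the logistic-regression choice are assumptions for illustration, not the authors' published pipeline.

```python
# Hypothetical sketch: derive the smile-dynamics features named in the abstract
# (magnitude, onset time, onset speed, offset speed) from a rectified,
# low-pass-filtered zygomaticus-major EMG envelope, then separate posed from
# spontaneous smiles with a simple classifier. All specifics are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def smile_features(envelope: np.ndarray, fs: float) -> np.ndarray:
    """Return [magnitude, onset_time, onset_speed, offset_speed] for one trial.

    `envelope` is assumed to be an EMG amplitude trace sampled at `fs` Hz;
    onset and offset are taken at 10% of peak amplitude (an assumed criterion).
    """
    peak_idx = int(np.argmax(envelope))
    magnitude = float(envelope[peak_idx])
    threshold = 0.1 * magnitude

    above = envelope >= threshold
    onset_idx = int(np.argmax(above))               # first sample above threshold
    after_peak = above[peak_idx:]
    offset_idx = peak_idx + (len(after_peak) - 1 if after_peak.all()
                             else int(np.argmin(after_peak)))  # first drop below

    onset_time = onset_idx / fs                     # latency to smile onset (s)
    rise_dur = max((peak_idx - onset_idx) / fs, 1 / fs)
    fall_dur = max((offset_idx - peak_idx) / fs, 1 / fs)
    onset_speed = (magnitude - float(envelope[onset_idx])) / rise_dur
    offset_speed = (magnitude - float(envelope[offset_idx])) / fall_dur
    return np.array([magnitude, onset_time, onset_speed, offset_speed])

# Toy usage: synthetic envelopes standing in for real EMG recordings.
fs = 1000.0
t = np.arange(0.0, 4.0, 1 / fs)
posed = np.exp(-((t - 1.0) ** 2) / 0.05)        # fast, sharp, large activation
spont = 0.6 * np.exp(-((t - 2.0) ** 2) / 0.40)  # slower, smaller activation

X = np.vstack([smile_features(posed, fs), smile_features(spont, fs)])
y = np.array([1, 0])                            # 1 = posed, 0 = spontaneous
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```

In this sketch the classifier only formalizes the pattern the abstract describes: posed smiles tend to be larger and faster in onset and offset, so a linear boundary over those features suffices for the toy data.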


Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4123/6907846/32a65d8b11c4/pone.0226328.g001.jpg
