

A performance comparison of eight commercially available automatic classifiers for facial affect recognition.

Affiliations

Business School, Dublin City University, Dublin, Republic of Ireland.

Department of Experimental Psychology, University College London, London, England, United Kingdom.

Publication Information

PLoS One. 2020 Apr 24;15(4):e0231968. doi: 10.1371/journal.pone.0231968. eCollection 2020.

Abstract

In the wake of rapid advances in automatic affect analysis, commercial automatic classifiers for facial affect recognition have attracted considerable attention in recent years. While several options now exist to analyze dynamic video data, less is known about the relative performance of these classifiers, in particular when facial expressions are spontaneous rather than posed. In the present work, we tested eight out-of-the-box automatic classifiers, and compared their emotion recognition performance to that of human observers. A total of 937 videos were sampled from two large databases that conveyed the basic six emotions (happiness, sadness, anger, fear, surprise, and disgust) either in posed (BU-4DFE) or spontaneous (UT-Dallas) form. Results revealed a recognition advantage for human observers over automatic classification. Among the eight classifiers, there was considerable variance in recognition accuracy ranging from 48% to 62%. Subsequent analyses per type of expression revealed that performance by the two best performing classifiers approximated those of human observers, suggesting high agreement for posed expressions. However, classification accuracy was consistently lower (although above chance level) for spontaneous affective behavior. The findings indicate potential shortcomings of existing out-of-the-box classifiers for measuring emotions, and highlight the need for more spontaneous facial databases that can act as a benchmark in the training and testing of automatic emotion recognition systems. We further discuss some limitations of analyzing facial expressions that have been recorded in controlled environments.
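The study scores each classifier's six-way emotion judgments against ground-truth labels, separately for posed and spontaneous stimuli, and compares accuracy to the 1/6 chance level. As a purely illustrative sketch of that kind of scoring (hypothetical data, labels, and function names; not the authors' analysis code), the per-set accuracy could be computed along these lines:

```python
# Hypothetical sketch: scoring a classifier's emotion predictions against
# ground-truth labels per stimulus set (posed vs. spontaneous), as the
# study describes. All records below are illustrative placeholders.
from collections import defaultdict

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]
CHANCE_LEVEL = 1.0 / len(EMOTIONS)  # ~16.7% for a six-way forced choice

# Each record: (stimulus set, true label, classifier's predicted label)
predictions = [
    ("posed", "happiness", "happiness"),
    ("posed", "fear", "surprise"),
    ("spontaneous", "sadness", "sadness"),
    ("spontaneous", "disgust", "anger"),
]

def accuracy_by_set(records):
    """Return recognition accuracy (hits / total) per stimulus set."""
    hits, totals = defaultdict(int), defaultdict(int)
    for subset, true_label, predicted in records:
        totals[subset] += 1
        hits[subset] += int(predicted == true_label)
    return {subset: hits[subset] / totals[subset] for subset in totals}

for subset, acc in accuracy_by_set(predictions).items():
    flag = "above" if acc > CHANCE_LEVEL else "at/below"
    print(f"{subset}: {acc:.1%} ({flag} chance of {CHANCE_LEVEL:.1%})")
```

Repeating this per classifier and per expression category would yield the kind of accuracy figures (48% to 62% across the eight classifiers) that the abstract reports.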


Figure 1 (pone.0231968.g001): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4554/7182192/ba0e83bbe688/pone.0231968.g001.jpg
