Institute of Marketing and Management, Department of Consumer Behavior, University of Bern, Engehaldenstrasse 4, 3012, Bern, Switzerland.
Max Planck Institute for Human Development, Berlin, Germany.
Behav Res Methods. 2018 Aug;50(4):1446-1460. doi: 10.3758/s13428-017-0996-1.
The goal of this study was to validate AFFDEX and FACET, two algorithms that classify emotions from facial expressions, as implemented in the iMotions software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare classification quality. The results show large variance in accuracy across emotions and databases, with FACET outperforming AFFDEX. In Study 2, the facial expressions of 110 participants were measured while they viewed emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED), and the Radboud Faces Database (RaFD). Accuracy again varied across emotions, and FACET again performed better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical (vs. natural) facial expressions, but performs worse for more natural facial expressions. We discuss potential sources of the limited validity and suggest research directions in the broader context of emotion research.
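The per-emotion accuracy comparison described above can be sketched as follows. This is a minimal illustration, assuming "Matching Score" denotes the proportion of stimuli whose top classified emotion matches the database's intended label; the paper's exact operationalization may differ, and all labels below are hypothetical.

```python
# Hedged sketch: per-emotion classification accuracy ("Matching Score"
# assumed to mean the fraction of stimuli whose classified emotion
# matches the intended label; the paper's exact definition may differ).
from collections import defaultdict

def matching_scores(true_labels, predicted_labels):
    """Return, for each intended emotion, the fraction of stimuli
    the classifier labeled with that same emotion."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for true, pred in zip(true_labels, predicted_labels):
        totals[true] += 1
        if true == pred:
            hits[true] += 1
    return {emotion: hits[emotion] / totals[emotion] for emotion in totals}

# Hypothetical example: intended labels vs. classifier output.
true = ["joy", "joy", "anger", "anger", "fear"]
pred = ["joy", "anger", "anger", "anger", "joy"]
scores = matching_scores(true, pred)
# joy: 1/2 correct, anger: 2/2 correct, fear: 0/1 correct
```

Computing the score separately per emotion (rather than one pooled accuracy) is what makes the cross-emotion variance reported in both studies visible.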