Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan.
KOHINATA Limited Liability Company, 2-7-3 Tateba, Naniwa-ku, Osaka 556-0020, Japan.
Sensors (Basel). 2021 Jun 20;21(12):4222. doi: 10.3390/s21124222.
In the field of affective computing, accurate automatic detection of facial movements is an important goal, and considerable progress has already been made. However, a systematic evaluation of these systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, and AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in a dynamic facial database at above-chance levels. Moreover, OpenFace and AFARtoolbox yielded higher values for the area under the receiver operating characteristic curve than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode when analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
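The central comparison metric in the abstract is the per-AU area under the ROC curve, which summarizes how well a detector's continuous output separates frames in which an AU is present from frames in which it is absent. The sketch below illustrates one way such a per-AU AUC comparison could be computed; it is not the authors' analysis code, and the file names and AU column labels are illustrative assumptions about how frame-level detector scores and ground-truth annotations might be stored.

```python
# Minimal sketch: per-AU ROC AUC for an automated AU detector's continuous
# outputs against binary ground-truth annotations. File names and column
# labels ("AU01", "AU12", ...) are hypothetical, not from the paper.
import pandas as pd
from sklearn.metrics import roc_auc_score

preds = pd.read_csv("detector_output.csv")  # frame-level AU scores (intensity or probability)
truth = pd.read_csv("ground_truth.csv")     # frame-level binary AU presence labels (0/1)

for au in ["AU01", "AU02", "AU06", "AU12", "AU14"]:
    # roc_auc_score accepts continuous scores; 0.5 corresponds to chance level
    auc = roc_auc_score(truth[au], preds[au])
    print(f"{au}: AUC = {auc:.3f}")
```

An AUC of 0.5 corresponds to chance-level detection, so values reliably above 0.5 for every AU would reflect the above-chance performance reported for all three systems.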