Hammal Zakia, Chu Wen-Sheng, Cohn Jeffrey F, Heike Carrie, Speltz Matthew L
Robotics Institute, Carnegie Mellon University, Pittsburgh, USA.
Department of Psychology, University of Pittsburgh, Pittsburgh, USA.
Int Conf Affect Comput Intell Interact Workshops. 2017 Oct;2017:216-221. doi: 10.1109/ACII.2017.8273603. Epub 2018 Feb 1.
Action unit (AU) detection in infants presents unique challenges relative to adults. Jaw contour is less distinct, facial texture is reduced, and rapid and unusual facial movements are common. To detect facial action units in the spontaneous behavior of infants, we propose a multi-label Convolutional Neural Network (CNN). Eighty-six infants were recorded during tasks intended to elicit enjoyment and frustration. Using an extension of FACS for infants (Baby FACS), over 230,000 frames were manually coded to provide ground truth. To control for chance agreement, inter-observer agreement between Baby FACS coders was quantified using free-marginal kappa. Kappa coefficients ranged from 0.79 to 0.93, indicating high agreement. The multi-label CNN achieved comparable agreement with manual coding; kappa ranged from 0.69 to 0.93. Importantly, CNN-based AU detection revealed the same pattern of change in infant expressiveness between tasks as manual coding. While further research is needed, these findings suggest that automatic AU detection is a viable alternative to manual coding of infant facial expression.
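For reference (standard background, not stated in the abstract itself): free-marginal kappa corrects observed agreement P_o under the assumption that each of the k coding categories is equally likely by chance, rather than estimating chance from the coders' marginal rates as Cohen's kappa does:

    kappa_free = (P_o - 1/k) / (1 - 1/k)

For binary AU coding (present/absent, k = 2) this reduces to kappa_free = 2·P_o - 1, so, for example, 90% observed frame-by-frame agreement corresponds to kappa_free = 0.80.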