Xu Xiaojing, Craig Kenneth D, Diaz Damaris, Goodwin Matthew S, Akcakaya Murat, Susam Büşra Tuğçe, Huang Jeannie S, de Sa Virginia R
Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
Department of Psychology, University of British Columbia, Vancouver, BC, Canada
CEUR Workshop Proc. 2018 Jul;2142:10-21.
Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity provides sensitive and specific information about pain, and computer vision algorithms have been developed to automatically detect Facial Action Units (AUs) defined by the Facial Action Coding System (FACS). Our prior work utilized information from computer vision, i.e., automatically detected facial AUs, to develop classifiers to distinguish between pain and no-pain conditions. However, applying pain/no-pain classifiers based on automated AU codings across different environmental domains results in diminished performance. In contrast, classifiers based on manually coded AUs show less environmentally based variability in performance. In this paper, we train a machine learning model to recognize pain using AUs coded by a computer vision system embedded in a software package called iMotions. We also study the relationship between iMotions (automatically) and human (manually) coded AUs. We find that AUs coded automatically differ from those coded by a human trained in the FACS system, and that the human coder is less sensitive to environmental changes. To improve classification performance in the current work, we applied transfer learning: we trained another machine learning model to map automated AU codings into a subspace of manual AU codings, enabling more robust pain recognition when only automatically coded AUs are available for the test data. With this transfer learning method, we improved the Area Under the ROC Curve (AUC) on independent data from new participants in our target domain from 0.67 to 0.72.
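The abstract does not specify the model architecture, so the following is only a minimal sketch of the two-stage transfer-learning idea it describes: learn a mapping from automated (iMotions-style) AU codings to the manual (FACS) AU space, train the pain classifier in the manual space, and at test time map automated codings before classifying. The linear mapper, logistic-regression classifier, and synthetic data below are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_train, n_test, n_aus = 200, 100, 12

# Synthetic stand-ins: clips with both manual (FACS) AU intensities and
# noisier automated codings, plus pain/no-pain labels driven by manual AUs.
manual_train = rng.random((n_train, n_aus))
auto_train = manual_train + 0.3 * rng.normal(size=(n_train, n_aus))
w = rng.normal(size=n_aus)
y_train = (manual_train @ w > np.median(manual_train @ w)).astype(int)

# Stage 1: learn a mapping from automated codings into the manual-AU space.
mapper = LinearRegression().fit(auto_train, manual_train)

# Stage 2: train the pain classifier on manual codings (the less
# environment-sensitive domain).
clf = LogisticRegression(max_iter=1000).fit(manual_train, y_train)

# Test time: only automated codings are available, so map, then classify.
manual_test = rng.random((n_test, n_aus))
auto_test = manual_test + 0.3 * rng.normal(size=(n_test, n_aus))
y_test = (manual_test @ w > np.median(manual_test @ w)).astype(int)

mapped = mapper.predict(auto_test)
auc = roc_auc_score(y_test, clf.predict_proba(mapped)[:, 1])
print(f"AUC using mapped automated codings: {auc:.2f}")
```

On this synthetic data the mapped codings yield an AUC well above chance, illustrating the mechanism; the paper's reported gain (0.67 to 0.72) comes from its own data and models.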