Erekat Diyala, Hammal Zakia, Siddiqui Maimoon, Dibeklioğlu Hamdi
Department of Computer Engineering, Bilkent University, Ankara, Turkey.
The Robotics Institute, Carnegie Mellon University, Pittsburgh, USA.
Proc ACM Int Conf Multimodal Interact. 2020 Oct;2020:156-164. doi: 10.1145/3395035.3425190.
The standard clinical assessment of pain is limited primarily to self-reported pain or clinician impression. While self-reported measurement of pain is useful, in some circumstances it cannot be obtained. Automatic facial expression analysis has emerged as a potential solution for an objective, reliable, and valid measurement of pain. In this study, we propose a video-based approach for the automatic measurement of self-reported pain and observer pain intensity, respectively. To this end, we explore the added value of three self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI) rating, for a reliable assessment of pain intensity from facial expression. Using a spatio-temporal Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) architecture, we propose to jointly minimize the mean absolute error of pain score estimation for each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for pain measurement from videos, namely, the UNBC-McMaster Pain Archive. Our results show that enforcing consistency between self-reported pain intensity scores collected using different pain scales enhances the quality of predictions and improves the state of the art in automatic self-reported pain estimation. The obtained results suggest that automatic assessment of self-reported pain intensity from videos is feasible and could be used as a complementary instrument to unburden caregivers, especially for vulnerable populations that need constant monitoring.
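The joint objective described above (per-scale mean absolute error plus a cross-scale consistency term) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `joint_pain_loss`, the choice of range-normalized pairwise absolute differences as the consistency penalty, and the weighting factor `lam` are all assumptions, since the abstract does not specify the exact form of the consistency term.

```python
from itertools import combinations


def joint_pain_loss(preds, targets, ranges, lam=1.0):
    """Illustrative joint loss: sum of per-scale MAE plus a consistency
    penalty on disagreement between range-normalized predictions.

    preds, targets: dict mapping scale name (e.g. 'VAS', 'SEN', 'AFF',
        'OPI') -> list of predicted / ground-truth scores.
    ranges: dict mapping scale name -> maximum value of that scale,
        used to normalize scores to [0, 1] before comparison
        (hypothetical normalization; scales have different ranges).
    lam: assumed trade-off weight between the two terms.
    """
    n = len(next(iter(preds.values())))

    # Mean absolute error of pain score estimation, summed over scales.
    mae = sum(
        sum(abs(p - t) for p, t in zip(preds[s], targets[s])) / n
        for s in preds
    )

    # Consistency term: average pairwise disagreement between scales
    # after normalizing each prediction by its scale's range.
    pairs = list(combinations(preds, 2))
    cons = 0.0
    for a, b in pairs:
        cons += sum(
            abs(pa / ranges[a] - pb / ranges[b])
            for pa, pb in zip(preds[a], preds[b])
        ) / n
    cons /= max(len(pairs), 1)

    # Minimizing this loss maximizes cross-scale consistency.
    return mae + lam * cons
```

For example, perfectly accurate and mutually consistent predictions yield zero loss, while predictions that are accurate per scale but disagree after normalization are still penalized through the consistency term.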