Hausmann Jacqueline, Salekin Md Sirajus, Zamzmi Ghada, Mouton Peter R, Prescott Stephanie, Ho Thao, Sun Yu, Goldgof Dmitry
Department of Computer Science and Engineering, College of Engineering, University of South Florida, Tampa, FL 33620, USA.
SRC Biosciences, Tampa, FL 33606, USA.
IEEE Access. 2024;12:49122-49133. doi: 10.1109/access.2024.3383789. Epub 2024 Apr 1.
There is a tendency for object detection systems using off-the-shelf algorithms to fail when deployed in complex scenes. The present work describes a case for detecting facial expression in post-surgical neonates (newborns) as a modality for predicting and classifying severe pain in the Neonatal Intensive Care Unit (NICU). Our initial testing showed that both an off-the-shelf face detector and a machine learning algorithm trained on adult faces failed to detect facial expression of neonates in the NICU. We improved accuracy in this complex scene by training a state-of-the-art "You-Only-Look-Once" (YOLO) face detection model on the USF-MNPAD-I dataset of neonate faces. At run time, our trained YOLO model showed a difference of 8.6% mean Average Precision (mAP) and 21.2% Area under the ROC Curve (AUC) for automatic classification of neonatal pain compared with manual pain scoring by NICU nurses. Given the challenges, time, and effort associated with collecting ground truth from the faces of post-surgical neonates, here we share the weights from training our YOLO model with these facial expression data. These weights can facilitate the further development of accurate strategies for detecting facial expression, which can be used to predict the time to pain onset in combination with other sensory modalities (body movements, crying frequency, vital signs). Reliable predictions of time to pain onset in turn create a therapeutic window of time wherein NICU nurses and providers can implement safe and effective strategies to mitigate severe pain in this vulnerable patient population.
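The abstract reports classification performance as Area under the ROC Curve (AUC). As an illustration of that metric only (the data below are synthetic, not from the study), a minimal pure-Python sketch computes ROC AUC via the Mann-Whitney U formulation:

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive
    example is scored higher than a randomly chosen negative one
    (ties count as 0.5). Labels are 1 (pain) or 0 (no pain)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical frame-level pain labels and classifier scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(labels, scores)  # 8 of 9 positive/negative pairs ranked correctly
```

This pairwise definition is equivalent to integrating the ROC curve; in practice one would use a library routine such as scikit-learn's `roc_auc_score` rather than the quadratic-time loop shown here.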