Cascella Marco, Shariff Mohammed Naveed, Lo Bianco Giuliano, Monaco Federica, Gargano Francesca, Simonini Alessandro, Ponsiglione Alfonso Maria, Piazza Ornella
Anesthesia and Pain Medicine, Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, Baronissi, 84081, Italy.
Department of AI&DS, Rajalakshmi Institute of Technology, Chennai, TN, India.
J Pain Res. 2024 Nov 9;17:3681-3696. doi: 10.2147/JPR.S491574. eCollection 2024.
Effective pain management is crucial for patient care, impacting comfort, recovery, and overall well-being. Traditional subjective pain assessment methods can be challenging, particularly in specific patient populations. This research explores an alternative approach using computer vision (CV) to detect pain through facial expressions.
The study implements the YOLOv8 real-time object detection model to analyze facial expressions indicative of pain. From four pain datasets, a dataset of pain-expressing faces was compiled, and each image was carefully labeled based on the presence of pain-associated Action Units (AUs). The labeling distinguished between two classes: pain and no pain. The pain category included specific AUs (AU4, AU6, AU7, AU9, AU10, and AU43) following the Prkachin and Solomon Pain Intensity (PSPI) scoring method. Images showing these AUs with a PSPI score above 2 were labeled as expressing pain. The manual labeling process utilized an open-source tool, makesense.ai, to ensure precise annotation. The dataset was then split into training and testing subsets, each containing a mix of pain and no-pain images. The YOLOv8 model underwent iterative training over 10 epochs. The model's performance was validated using precision, recall, mean Average Precision (mAP), and F1 score.
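The labeling rule described above can be sketched as follows. The PSPI score combines the intensities of the pain-related AUs (conventionally coded 0-5 under FACS, with AU43 eye closure coded 0/1) as PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43; images scoring above 2 were labeled as pain. The function and dictionary names below are illustrative, not taken from the paper.

```python
def pspi_score(au: dict) -> int:
    """Prkachin and Solomon Pain Intensity (PSPI) score.

    Combines FACS Action Unit intensities (0-5; AU43 is binary 0/1):
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43
    Missing AUs are treated as intensity 0.
    """
    return (au.get("AU4", 0)
            + max(au.get("AU6", 0), au.get("AU7", 0))
            + max(au.get("AU9", 0), au.get("AU10", 0))
            + au.get("AU43", 0))


def label_image(au: dict, threshold: int = 2) -> str:
    """Assign the 'pain' class when the PSPI score exceeds the threshold."""
    return "pain" if pspi_score(au) > threshold else "nopain"
```

For example, an image with AU4 at intensity 2, AU7 at intensity 3, and eyes closed (AU43 = 1) scores PSPI = 2 + 3 + 0 + 1 = 6 and would be labeled "pain", while a face showing only AU4 and AU6 at intensity 1 scores exactly 2 and falls in the "nopain" class.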
When considering all classes collectively, our model attained an mAP of 0.893 at an Intersection over Union (IoU) threshold of 0.5. The precision for "pain" and "nopain" detection was 0.868 and 0.919, respectively. F1 scores for the classes "pain", "nopain", and "all classes" reached a peak value of 0.80. Finally, the model was tested on the Delaware dataset and in a real-world scenario.
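For reference, the F1 score reported above is the harmonic mean of precision and recall, F1 = 2PR / (P + R). A minimal sketch (the recall value in the comment is illustrative, not a figure from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# With the reported "pain" precision of 0.868, a recall of roughly 0.742
# (a hypothetical value chosen for illustration) would yield an F1 of
# about 0.80, matching the reported peak.
```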
Despite limitations, this study highlights the promise of using real-time computer vision models for pain detection, with potential applications in clinical settings. Future research will focus on evaluating the model's generalizability across diverse clinical scenarios and its integration into clinical workflows to improve patient care.