Chaisiriprasert Parkpoom, Patchsuwan Nattapat
College of Digital Innovation Technology, Rangsit University, Pathumthani 12000, Thailand.
J Imaging. 2025 May 9;11(5):151. doi: 10.3390/jimaging11050151.
Accurate assessment of pain intensity is critical, particularly for patients who are unable to verbally express their discomfort. This study proposes a novel weighted analytical framework that integrates facial expression analysis through action units (AUs) with a facial feature-based weighting mechanism to enhance the estimation of pain intensity. The proposed method was evaluated on a dataset comprising 4084 facial images from 25 individuals and demonstrated an average accuracy of 92.72% using the weighted pain level estimation model, in contrast to 83.37% achieved using conventional approaches. The observed improvements are primarily attributed to the strategic utilization of AU zones and expression-based weighting, which enable more precise differentiation between pain-related and non-pain-related facial movements. These findings underscore the efficacy of the proposed model in enhancing the accuracy and reliability of automated pain detection, especially in contexts where verbal communication is impaired or absent.
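The abstract describes combining action unit (AU) intensities with a feature-based weighting mechanism. A minimal sketch of that idea, assuming per-AU weights and a normalised weighted sum, is shown below; the specific AUs, weights, and category thresholds are illustrative placeholders, not values from the study.

```python
# Hypothetical sketch of AU-weighted pain estimation. The weights and
# thresholds below are illustrative assumptions, not the study's values.

# Pain-related AUs commonly cited in the facial-expression literature,
# each with a placeholder importance weight.
PAIN_AU_WEIGHTS = {
    "AU4": 1.0,   # brow lowerer
    "AU6": 0.8,   # cheek raiser
    "AU7": 0.8,   # lid tightener
    "AU9": 0.9,   # nose wrinkler
    "AU10": 0.9,  # upper lip raiser
    "AU43": 0.7,  # eyes closed
}

def weighted_pain_score(au_intensities: dict) -> float:
    """Combine observed AU intensities (0-5 scale) with per-AU weights,
    normalised by the total weight of the pain-related AUs present."""
    relevant = {au: v for au, v in au_intensities.items()
                if au in PAIN_AU_WEIGHTS}
    total_weight = sum(PAIN_AU_WEIGHTS[au] for au in relevant)
    if total_weight == 0.0:
        return 0.0
    score = sum(PAIN_AU_WEIGHTS[au] * v for au, v in relevant.items())
    return score / total_weight

def pain_level(score: float) -> str:
    """Map a normalised score to a coarse pain category (thresholds assumed)."""
    if score < 1.0:
        return "none"
    if score < 2.5:
        return "mild"
    if score < 4.0:
        return "moderate"
    return "severe"
```

Normalising by the total weight of the AUs actually observed keeps the score comparable when some AUs are occluded or undetected, which is one plausible way such a weighting scheme can down-weight non-pain-related facial movements.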