A CrossMod-Transformer deep learning framework for multi-modal pain detection through EDA and ECG fusion.

Authors

Farmani Jaleh, Bargshady Ghazal, Gkikas Stefanos, Tsiknakis Manolis, Rojas Raul Fernandez

Affiliations

University of Rome 'La Sapienza', Department of Computer, Control & Management Engineering, Rome, 00185, Italy.

University of Canberra, Faculty of Science & Technology, Canberra, 2617, Australia.

Publication

Sci Rep. 2025 Aug 12;15(1):29467. doi: 10.1038/s41598-025-14238-y.

Abstract

Pain is a multifaceted phenomenon that significantly affects a large portion of the global population. Objective pain assessment is essential for developing effective management strategies, which in turn contribute to more efficient and responsive healthcare systems. However, accurately evaluating pain remains a complex challenge due to subtle physiological and behavioural indicators, individual-specific pain responses, and the need for continuous patient monitoring. Automatic pain assessment systems offer promising, technology-driven solutions to support and enhance various aspects of the pain evaluation process. Physiological indicators offer valuable insights into pain-related states and are generally less influenced by individual variability compared to behavioural modalities, such as facial expressions. Skin conductance, regulated by sweat gland activity, and the heart's electrical signals are both influenced by changes in the sympathetic nervous system. Biosignals, such as electrodermal activity (EDA) and electrocardiogram (ECG), can, therefore, objectively capture the body's physiological responses to painful stimuli. This paper proposes a novel multi-modal ensemble deep learning framework that combines electrodermal activity and electrocardiogram signals for automatic pain recognition. The proposed framework includes a uni-modal approach (FCN-ALSTM-Transformer) comprising a Fully Convolutional Network, an Attention-based LSTM, and a Transformer block to integrate features extracted by these models. Additionally, a multi-modal approach (CrossMod-Transformer) is introduced, featuring a dedicated Transformer architecture that fuses electrodermal activity and electrocardiogram signals. Experimental evaluations were primarily conducted on the BioVid dataset, with further cross-dataset validation using the AI4PAIN 2025 dataset to assess the generalisability of the proposed method. Notably, the CrossMod-Transformer achieved an accuracy of 87.52% on BioVid and 75.83% on AI4PAIN, demonstrating strong performance across independent datasets and outperforming several state-of-the-art uni-modal and multi-modal methods. These results highlight the potential of the proposed framework to improve the reliability of automatic multi-modal pain recognition and support the development of more objective and inclusive clinical assessment tools.
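The abstract names the fusion mechanism but gives no implementation detail. As a rough illustration of the kind of cross-modal Transformer fusion it describes, the sketch below pairs a small convolutional encoder per biosignal with bidirectional cross-attention, so that EDA tokens query ECG tokens and vice versa. This is a minimal sketch in PyTorch under stated assumptions: all module names, dimensions, hyper-parameters, and the binary pain/no-pain head are illustrative, not the authors' CrossMod-Transformer.

```python
# A minimal sketch, assuming PyTorch; all names, dimensions, and the binary
# pain/no-pain head are illustrative, NOT the authors' implementation.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuses EDA and ECG token streams with bidirectional cross-attention."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_classes: int = 2):
        super().__init__()
        # Per-modality 1D convolutional encoders: simple stand-ins for the
        # richer FCN / attention-LSTM feature extractors the abstract describes.
        self.eda_enc = nn.Conv1d(1, d_model, kernel_size=7, padding=3)
        self.ecg_enc = nn.Conv1d(1, d_model, kernel_size=7, padding=3)
        # Cross-attention in both directions: each modality queries the other.
        self.eda_to_ecg = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ecg_to_eda = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, eda: torch.Tensor, ecg: torch.Tensor) -> torch.Tensor:
        # eda, ecg: (batch, 1, time) windows of the raw biosignals.
        e = self.eda_enc(eda).transpose(1, 2)   # -> (batch, time, d_model)
        c = self.ecg_enc(ecg).transpose(1, 2)
        # EDA tokens attend over ECG tokens, and vice versa.
        e_fused, _ = self.eda_to_ecg(query=e, key=c, value=c)
        c_fused, _ = self.ecg_to_eda(query=c, key=e, value=e)
        # Mean-pool each fused stream over time, concatenate, classify.
        pooled = torch.cat([e_fused.mean(dim=1), c_fused.mean(dim=1)], dim=-1)
        return self.classifier(pooled)          # (batch, n_classes) logits


# Usage: a batch of 8 signal windows, 256 samples per channel (hypothetical).
model = CrossModalFusion()
logits = model(torch.randn(8, 1, 256), torch.randn(8, 1, 256))
print(logits.shape)  # torch.Size([8, 2])
```

Bidirectional cross-attention lets each modality weight the other's time steps, which is one common way a dedicated Transformer fusion of two biosignals can be realised; the authors' actual architecture, preprocessing, and hyper-parameters are given in the full paper.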

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab40/12344049/8939a00461c6/41598_2025_14238_Fig1_HTML.jpg
