
Predicting vasovagal reactions to needles from video data using 2D-CNN with GRU and LSTM.

Authors

Rudokaite Judita, Ong Sharon, Onal Ertugrul Itir, Janssen Mart P, Huis In 't Veld Elisabeth

Affiliations

Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, The Netherlands.

Donor Medicine Research, Sanquin Research, Amsterdam, The Netherlands.

Publication

PLoS One. 2025 Jan 24;20(1):e0314038. doi: 10.1371/journal.pone.0314038. eCollection 2025.

Abstract

When undergoing or about to undergo a needle-related procedure, most people are not aware of the adverse emotional and physical reactions (so-called vasovagal reactions, VVR) that might occur. Thus, rather than relying on self-report measurements, we investigate whether we can predict VVR levels from video sequences containing facial information recorded during blood donation. We filmed 287 blood donors throughout the blood donation procedure, obtaining 1945 videos for data analysis. We compared five different video durations (45, 30, 20, 10, and 5 seconds) to determine the shortest duration required to predict VVR levels. We used a 2D-CNN with LSTM and GRU to predict continuous VVR scores and to classify discrete (low and high) VVR values obtained during the blood donation. In the classification task, the highest F1 score achieved on the high-VVR class was 0.74, with a precision of 0.93, a recall of 0.61, a PR-AUC of 0.86, and an MCC of 0.61, using a pre-trained ResNet152 model with LSTM on 25 frames; in the regression task, the lowest root mean square error achieved was 2.56, using GRU on 50 frames. This study demonstrates that it is possible to predict vasovagal responses during a blood donation using facial features, which supports the further development of interventions to prevent VVR.
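As a quick sanity check on the reported classification metrics: the F1 score is the harmonic mean of precision and recall, so the headline figures can be cross-checked directly. A minimal pure-Python sketch (the precision and recall values are taken from the abstract; the helper function is our own, not from the paper's code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the high-VVR class (ResNet152 + LSTM, 25 frames)
precision, recall = 0.93, 0.61
f1 = f1_score(precision, recall)
print(round(f1, 2))  # → 0.74
```

The result matches the reported F1 of 0.74, confirming the abstract's numbers are internally consistent.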

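The fixed frame counts reported above (25 frames for classification, 50 for regression) imply that each clip is reduced to a fixed number of frames before being fed to the CNN. A hedged sketch of one common approach, uniform index sampling (the frame rate and clip length below are illustrative assumptions, not values from the paper):

```python
def sample_frame_indices(n_total: int, n_sample: int) -> list[int]:
    """Pick n_sample evenly spaced frame indices from a clip of n_total frames."""
    step = n_total / n_sample
    return [int(i * step) for i in range(n_sample)]

# e.g. a hypothetical 5-second clip at 25 fps reduced to 25 frames
print(sample_frame_indices(125, 25))  # [0, 5, 10, ..., 120]
```

Sampling evenly across the clip preserves the temporal spread of the facial signal, which is what the downstream LSTM/GRU models over.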

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/37d1/11760633/4b60a5ee2ebc/pone.0314038.g001.jpg
