The Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, 78712, USA.
The Department of Surgery, UT Southwestern Medical Center, Dallas, TX, 75390, USA.
Int J Comput Assist Radiol Surg. 2022 Apr;17(4):785-794. doi: 10.1007/s11548-022-02568-5. Epub 2022 Feb 12.
Excessive stress experienced by the surgeon can negatively affect the surgeon's technical skills. The goal of this study is to evaluate and validate a deep learning framework for real-time detection of stressed surgical movements using kinematic data.
Thirty medical students were recruited as subjects to perform a modified peg transfer task and were randomized into two groups: a control group (n=15) and a stressed group (n=15) that completed the task under deteriorating, simulated stressful conditions. To classify stressed movements, we first developed an attention-based long short-term memory (LSTM) recurrent neural network trained to classify normal/stressed trials and to obtain the contribution of each data frame to the stress-level classification. Next, we extracted the important frames from each trial and used another LSTM network to perform frame-wise classification of normal and stressed movements.
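For illustration, the sketch below shows one common way to build an attention-based LSTM classifier that outputs both a trial-level normal/stressed prediction and per-frame attention weights. This is not the authors' implementation; the framework (PyTorch), input dimensionality, hidden size, additive attention form, and 30 Hz sampling rate are all assumptions.

```python
# Minimal sketch (assumed architecture, not the study's code) of an
# attention-based LSTM that classifies a kinematic sequence as normal vs.
# stressed and exposes each frame's contribution via attention weights.
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    def __init__(self, n_features=6, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size, 1)   # scores each frame's hidden state
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) kinematic frames
        h, _ = self.lstm(x)                                # (batch, time, hidden)
        scores = self.attn(h).squeeze(-1)                  # (batch, time)
        weights = torch.softmax(scores, dim=1)             # per-frame contribution
        context = (weights.unsqueeze(-1) * h).sum(dim=1)   # attention-weighted summary
        return self.fc(context), weights                   # trial logits, frame weights

# Example: one 8-second trial at an assumed 30 Hz with 6 kinematic channels
model = AttentionLSTMClassifier()
logits, frame_weights = model(torch.randn(1, 240, 6))
```

Frames with large attention weights can then be extracted as the "important frames" fed to the second, frame-wise classifier described above.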
The classification between normal and stressed trials using the attention-based LSTM model reached an overall accuracy of 75.86% under Leave-One-User-Out (LOUO) cross-validation. The second LSTM classifier was able to distinguish between typical normal and stressed movements with an accuracy of 74.96% using an 8-second observation window under LOUO. Finally, the normal and stressed movements within stressed trials could be classified with an accuracy of 68.18% using a 16-second observation window under LOUO.
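As a brief illustration of the evaluation protocol, the sketch below shows a Leave-One-User-Out split, in which every fold holds out all trials from one subject. The array shapes, number of trials per subject, and use of scikit-learn's LeaveOneGroupOut are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch of Leave-One-User-Out (LOUO) cross-validation: each fold
# trains on all subjects but one and tests on the held-out subject's trials.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

n_trials, n_frames, n_features = 90, 240, 6            # assumed shapes
X = np.random.randn(n_trials, n_frames, n_features)    # kinematic sequences
y = np.random.randint(0, 2, n_trials)                  # 0 = normal, 1 = stressed
subjects = np.repeat(np.arange(30), 3)                 # 3 trials per subject (assumed)

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    held_out = np.unique(subjects[test_idx])[0]
    # fit the classifier on train_idx, evaluate on the held-out subject's trials
    print(f"fold for subject {held_out}: {len(train_idx)} train / {len(test_idx)} test")
```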
In this study, we extracted the movements that are more likely to be affected by stress and validated the feasibility of using LSTM networks and kinematic data for frame-wise detection of stress level during laparoscopic training. The proposed classifier could potentially be integrated with robot-assisted surgery platforms for stress management purposes.