Shafiei Somayeh B, Shadpour Saeed, Intes Xavier, Rahul Rahul, Toussi Mehdi Seilanian, Shafqat Ambreen
Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA.
University of Guelph, Guelph, ON, N1G 2W1, Canada.
Surg Endosc. 2023 Nov;37(11):8447-8463. doi: 10.1007/s00464-023-10409-y. Epub 2023 Sep 20.
This study explored the use of electroencephalogram (EEG) and eye gaze features, experience-related features, and machine learning to evaluate performance and learning rates in fundamentals of laparoscopic surgery (FLS) and robotic-assisted surgery (RAS).
EEG and eye-tracking data were collected from 25 participants performing three FLS tasks and 22 participants performing two RAS tasks. Generalized linear mixed models with L1-penalized estimation were developed to evaluate performance objectively from EEG and eye gaze features, and linear models were developed to evaluate learning rate from these features together with the performance score on the first attempt. Experience metrics were added to assess their role in learning robotic surgery. Differences in performance across experience levels were tested with analysis of variance.
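The statistical pipeline described above can be sketched in miniature. The study fitted L1-penalized generalized linear mixed models; as a rough single-level analogue, a Lasso fit illustrates how the L1 penalty selects a sparse subset of EEG/eye gaze features, and a one-way ANOVA illustrates the cross-group performance comparison. All data, feature counts, and group splits below are synthetic placeholders, not the study's data:

```python
# Hedged sketch: L1-penalized regression of performance scores on
# EEG/eye-gaze features, plus a one-way ANOVA across experience levels.
# Note: Lasso is a single-level stand-in for the study's penalized GLMM.
import numpy as np
from sklearn.linear_model import Lasso
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic design: 25 participants x 10 candidate EEG/eye-gaze features
X = rng.normal(size=(25, 10))
true_coef = np.array([1.5, 0, 0, -2.0, 0, 0, 0, 0.8, 0, 0])
y = X @ true_coef + rng.normal(scale=0.1, size=25)  # performance scores

# The L1 penalty drives most coefficients to exactly zero,
# leaving a sparse set of "important" features
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected feature indices:", selected)

# One-way ANOVA comparing performance across three experience groups
faculty, residents, premed = y[:8], y[8:17], y[17:]
stat, p = f_oneway(faculty, residents, premed)
print(f"ANOVA F={stat:.2f}, p={p:.3f}")
```

The sparsity of the Lasso solution is what makes the selected coefficients interpretable as "important features," mirroring the feature-importance framing in the results; a true mixed model would additionally include a random intercept per participant to handle repeated attempts.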
EEG and eye gaze features, together with experience-related features, were important for evaluating performance in FLS and RAS tasks and yielded reasonable results. Residents outperformed faculty in FLS peg transfer (p = 0.04), while faculty and residents both excelled over pre-medical students in the FLS pattern cut (p = 0.01 and p < 0.001, respectively). Fellows outperformed pre-medical students in FLS suturing (p = 0.01). In RAS tasks, both faculty and fellows surpassed pre-medical students (for the RAS pattern cut, p = 0.001 for faculty and p = 0.003 for fellows; for RAS tissue dissection, p < 0.001 for both groups), with residents also showing superior skills in tissue dissection (p = 0.03).
Findings could be used to develop training interventions for improving surgical skills and have implications for understanding motor learning and designing interventions to enhance learning outcomes.