Innovation Center Computer Assisted Surgery, University of Leipzig, Leipzig, Germany.
Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany.
Int J Comput Assist Radiol Surg. 2022 Sep;17(9):1619-1631. doi: 10.1007/s11548-022-02588-1. Epub 2022 Mar 16.
For an in-depth analysis of the learning benefits that a stereoscopic view offers during endoscopic training, surgeons require a custom surgical evaluation system that enables simulator-independent evaluation of endoscopic skills. Automated surgical skill assessment is urgently needed, since supervised training sessions and video analysis of recorded endoscope data are very time-consuming. This paper presents a first step toward a multimodal training evaluation system that is not restricted to specific training setups and fixed evaluation metrics.
With our system, we performed data fusion of motion and muscle-action measurements during multiple endoscopic exercises. The exercises were performed by medical experts of different surgical skill levels, using either two- or three-dimensional endoscopic imaging. Based on the multimodal measurements, training features were calculated and their significance assessed by distance and variance analysis. Finally, the features were used for automatic classification of the endoscope mode used.
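The paper does not specify the exact feature definitions or the significance test, so the following is only a minimal sketch of the kind of pipeline described: amplitude-related EMG features and wrist-velocity features are computed per attempt, and a one-way ANOVA (variance analysis) checks whether a feature separates the two view modes. All signals and feature values here are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic stand-ins for one exercise attempt: EMG from the right
# forearm (arbitrary units) and 3-D wrist positions over time.
emg = rng.normal(0.0, 1.0, size=2000)
wrist_pos = np.cumsum(rng.normal(0.0, 0.01, size=(500, 3)), axis=0)
dt = 1.0 / 100.0  # assumed 100 Hz motion sampling rate

# Amplitude-related muscle feature: root mean square of the EMG signal.
emg_rms = np.sqrt(np.mean(emg ** 2))

# Velocity feature: mean magnitude of the wrist velocity.
vel = np.diff(wrist_pos, axis=0) / dt
mean_speed = float(np.mean(np.linalg.norm(vel, axis=1)))

# Variance analysis across the two view modes: a one-way ANOVA on a
# per-attempt feature flags whether its group means differ.
feat_2d = rng.normal(1.0, 0.2, size=50)  # hypothetical values, 2-D mode
feat_3d = rng.normal(1.3, 0.2, size=50)  # hypothetical values, 3-D mode
f_stat, p_val = f_oneway(feat_2d, feat_3d)
print(emg_rms, mean_speed, f_stat, p_val)
```

With this construction, a feature whose mean differs between view modes yields a large F statistic and a small p-value, mirroring the significance screening described above.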
During the study, 324 datasets from 12 participating volunteers were recorded, consisting of spatial information from the participants' joints and electromyographic information from the right forearm. Feature significance analysis showed distinct differences in significance, with amplitude-related muscle information and velocity information from the hand and wrist among the most significant features. The generated classification models exceeded an accuracy of 90% in predicting the endoscope type used.
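The abstract does not name the classifier family, so the following is a hedged illustration of the classification step with a generic model: multivariate feature vectors (one per attempt) are labeled by view mode and evaluated with cross-validation. The feature distributions are invented for the example; only the overall setup (features in, endoscope mode out) follows the description above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical 3-D feature vectors per attempt, e.g.
# (EMG amplitude, wrist speed, hand speed); values are synthetic.
n = 160
X_2d = rng.normal(loc=[1.0, 0.8, 0.5], scale=0.15, size=(n // 2, 3))
X_3d = rng.normal(loc=[1.3, 0.6, 0.7], scale=0.15, size=(n // 2, 3))
X = np.vstack([X_2d, X_3d])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = 2-D mode, 1 = 3-D mode

# Cross-validated accuracy of predicting the endoscope mode from features.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

When the feature means differ clearly between modes, as assumed here, a simple linear model already reaches high cross-validated accuracy; the study's reported >90% accuracy suggests a comparably strong separation in the real feature space.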
The results support the validity of our setup and feature calculation, while the feature analysis reveals significant distinctions and can be used to identify the endoscopic view mode used, something not apparent when analyzing completion times for each exercise attempt. The presented work is therefore a first step toward future developments in which multivariate feature vectors are classified automatically in real time to evaluate endoscopic training and track learning progress.