Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom.
Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom.
World Neurosurg. 2021 May;149:e669-e686. doi: 10.1016/j.wneu.2021.01.117. Epub 2021 Feb 12.
BACKGROUND/OBJECTIVE: Technical skill acquisition is an essential component of neurosurgical training. Educational theory suggests that optimal learning and improvement in performance depend on the provision of objective feedback. The aim of this study was therefore to develop a vision-based framework, built on a novel representation of surgical tool motion and interactions, capable of automated and objective assessment of microsurgical skill.
METHODS: Videos were obtained from 1 expert, 6 intermediate, and 12 novice surgeons performing arachnoid dissection in a validated clinical model using a standard operating microscope. A mask region-based convolutional neural network (Mask R-CNN) framework was used to segment the tools present within the operative field in each recorded video frame. Tool motion analysis was achieved using novel triangulation metrics. The performance of the framework in classifying skill levels was evaluated using the area under the curve and accuracy. Objective measures of tool motion were also compared across surgeon skill levels using the Mann-Whitney U test, with P < 0.05 considered statistically significant.
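As an illustration only, and not the authors' implementation, the sketch below shows how per-frame tool masks from an off-the-shelf Mask R-CNN (torchvision's `maskrcnn_resnet50_fpn`) could be reduced to tool-tip positions, from which frame-to-frame dissector velocity and inter-tool tip distance might be derived; the tip-extraction heuristic, score threshold, and metric definitions are assumptions.

```python
# Sketch: tool segmentation with an off-the-shelf Mask R-CNN and simple
# per-frame motion metrics. Illustrative only; the tip heuristic (topmost
# mask pixel) and thresholds are assumptions, not the paper's method.
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(pretrained=True).eval()

def tool_tips(frame_rgb, score_thresh=0.7):
    """Return one (x, y) tip estimate per detected instrument in a video frame."""
    tensor = torch.from_numpy(frame_rgb / 255.0).permute(2, 0, 1).float()
    with torch.no_grad():
        out = model([tensor])[0]
    tips = []
    for mask, score in zip(out["masks"], out["scores"]):
        if score < score_thresh:
            continue
        ys, xs = np.nonzero(mask[0].numpy() > 0.5)
        if len(ys) == 0:
            continue
        i = ys.argmin()  # assumption: the instrument tip is the topmost mask pixel
        tips.append((float(xs[i]), float(ys[i])))
    return tips

def frame_metrics(tips_prev, tips_curr):
    """Per-frame dissector velocity (pixels/frame) and inter-tool tip distance."""
    velocity = None
    if tips_prev and tips_curr:
        velocity = float(np.hypot(tips_curr[0][0] - tips_prev[0][0],
                                  tips_curr[0][1] - tips_prev[0][1]))
    tip_distance = None
    if len(tips_curr) >= 2:
        tip_distance = float(np.hypot(tips_curr[0][0] - tips_curr[1][0],
                                      tips_curr[0][1] - tips_curr[1][1]))
    return velocity, tip_distance
```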
RESULTS: The area under the curve was 0.977 and the accuracy was 84.21%. Several differences were found, including a lower median dissector velocity among experts (190.38 ms vs. 116.38 ms; P = 0.0004) and a smaller inter-tool tip distance (median 46.78 vs. 75.92; P = 0.0002) compared with novices.
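For the group comparison step, a minimal sketch of how per-trial metric summaries could be compared with a two-sided Mann-Whitney U test (via `scipy.stats.mannwhitneyu`) follows; the numeric values are placeholders, not study data.

```python
# Sketch: comparing per-trial median metrics between skill groups with a
# Mann-Whitney U test. The values below are placeholders, not study data.
from scipy.stats import mannwhitneyu

expert_tip_distance = [44.1, 47.3, 49.0]        # hypothetical per-trial medians
novice_tip_distance = [71.5, 78.2, 74.9, 80.3]  # hypothetical per-trial medians

stat, p_value = mannwhitneyu(expert_tip_distance, novice_tip_distance,
                             alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.4f}")  # P < 0.05 taken as significant
```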
CONCLUSIONS: Automated and objective analysis of microsurgery is feasible using a mask region-based convolutional neural network and a novel representation of tool motion and interactions. This may support technical skills training and assessment in neurosurgery.