Applied Mathematics Department at the Technion, Israel Institute of Technology, 3200003, Haifa, Israel.
Mayo Clinic, Surgery, Rochester, MN, USA.
Int J Comput Assist Radiol Surg. 2022 Mar;17(3):437-448. doi: 10.1007/s11548-022-02559-6. Epub 2022 Feb 1.
The goal of this study was to develop a reliable open surgery suturing simulation system for training medical students in low-resource settings or at home. Specifically, we developed an algorithm that localizes tools and hands and identifies the interactions between them from simple webcam video, and computes motion metrics for the assessment of surgical skill.
Twenty-five participants performed multiple suturing tasks using our simulator. The YOLO network was modified into a multi-task network for tool localization and tool-hand interaction detection. This was accomplished by splitting the YOLO detection heads so that they support both tasks with only a minimal increase in run-time. Based on the system's output, motion metrics were then calculated. These included traditional metrics, such as time and path length, as well as new metrics assessing how participants hold the tools.
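The head-splitting idea described above can be illustrated with a minimal sketch: a shared backbone is computed once, and two lightweight task-specific heads branch from it. All class names, layer sizes, and output counts here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualHeadDetector(nn.Module):
    """Hypothetical sketch of a dual-task detector: one shared backbone,
    two YOLO-style heads (tool localization, hand-tool interaction)."""

    def __init__(self, in_channels=3, feat_channels=64,
                 n_tool_outputs=5, n_interaction_outputs=3):
        super().__init__()
        # Shared feature extractor (stands in for the YOLO backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Task-specific heads: because the backbone is shared, adding the
        # second head costs only a small amount of extra computation.
        self.tool_head = nn.Conv2d(feat_channels, n_tool_outputs, 1)
        self.interaction_head = nn.Conv2d(feat_channels, n_interaction_outputs, 1)

    def forward(self, x):
        feats = self.backbone(x)  # computed once, reused by both heads
        return self.tool_head(feats), self.interaction_head(feats)

model = DualHeadDetector()
tools, interactions = model(torch.zeros(1, 3, 64, 64))
```

The key design point is that both outputs come from a single backbone pass, which is why the dual-task network's load is close to that of one network rather than two.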
The dual-task network performed similarly to two separate networks, while its computational load was only slightly higher than that of a single network. In addition, the motion metrics showed significant differences between experts and novices.
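The traditional metrics mentioned above, time and path length, can be sketched from per-frame detection output. This is an illustrative assumption about how such metrics are computed (function names and the idea of using bounding-box centroids are hypothetical, not taken from the paper):

```python
import math

def path_length(centroids):
    """Total Euclidean distance travelled by a tracked tool, given
    per-frame (x, y) centroids of its detected bounding box."""
    return sum(math.dist(a, b) for a, b in zip(centroids, centroids[1:]))

def task_time(n_frames, fps):
    """Task duration in seconds from frame count and capture rate."""
    return n_frames / fps

# Example: a tool moving along a simple L-shaped path.
track = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]
print(path_length(track))   # 3.0 + 4.0 = 7.0
print(task_time(300, 30))   # 10.0 seconds at 30 fps
```

Experts typically complete a task in less time and with a shorter path, which is consistent with the expert-novice differences reported here.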
While video capture is an essential part of minimally invasive surgery, it is not an integral component of open surgery. Thus, new algorithms addressing the unique challenges of open surgery videos are required. In this study, a dual-task network was developed to solve both a localization task and a hand-tool interaction task. The dual network can easily be expanded into a multi-task network, which may be useful for images with multiple layers and for evaluating the interactions between these layers.