College of Mechanical Engineering, Guangxi University, Nanning 530004, China.
State Key Laboratory for Conservation and Utilization of Subtropical Agro-Bioresources, Nanning 530004, China.
Sensors (Basel). 2023 May 5;23(9):4496. doi: 10.3390/s23094496.
Surgical skill assessment can quantify the quality of a surgical operation via the motion state of the surgical instrument tip (SIT) and is considered one of the primary effective means of improving the accuracy of surgical operation. Traditional methods have shown promising results in skill assessment. However, this success depends on sensors mounted on the SIT, making these approaches impractical for minimally invasive surgical robots, whose end effectors are too small to accommodate such sensors. To address the assessment of operation quality in robot-assisted minimally invasive surgery (RAMIS), this paper proposes a new automatic framework for assessing surgical skills based on visual motion tracking and deep learning. The new method innovatively combines vision and kinematics: the kernel correlation filter (KCF) is introduced to obtain the key motion signals of the SIT, which are then classified by a residual neural network (ResNet), realizing automated skill assessment in RAMIS. To verify its effectiveness and accuracy, the proposed method is applied to JIGSAWS, a public minimally invasive surgical robot dataset. The results show that the method, based on visual motion tracking and a deep neural network model, can effectively and accurately assess robot-assisted surgical skill in near real-time. With a fairly short computational processing time of 3 to 5 s, the method achieves average accuracies of 92.04% and 84.80% in distinguishing two and three skill levels, respectively. This study makes an important contribution to the safe and high-quality development of RAMIS.
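The pipeline described above (KCF tracking of the SIT, followed by ResNet classification of the resulting motion signal) could look roughly like the minimal sketch below. This is an illustrative assumption, not the authors' implementation: OpenCV's stock KCF tracker stands in for the paper's tracking stage, the helper names (`track_sit`, `extract_motion_signal`, `SkillResNet`) are hypothetical, and the abstract does not specify the actual signal preprocessing or network depth.

```python
# Hypothetical sketch: KCF tracking of the surgical instrument tip (SIT),
# conversion of the trajectory into a kinematic signal, and skill-level
# classification with a small 1D residual network. Assumes
# opencv-contrib-python and PyTorch are installed.
import cv2
import numpy as np
import torch
import torch.nn as nn

def track_sit(video_path, init_bbox):
    """Track the SIT with KCF; return per-frame (x, y) centers, shape (T, 2)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = cv2.TrackerKCF_create()   # KCF tracker from opencv-contrib
    tracker.init(frame, init_bbox)      # init_bbox = (x, y, w, h) around the tip
    centers = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, (x, y, w, h) = tracker.update(frame)
        if found:
            centers.append((x + w / 2.0, y + h / 2.0))
    cap.release()
    return np.asarray(centers, dtype=np.float32)

def extract_motion_signal(centers):
    """Stack position and velocity into a (channels, T) motion signal."""
    velocity = np.gradient(centers, axis=0)
    return np.concatenate([centers, velocity], axis=1).T  # (4, T)

class ResidualBlock(nn.Module):
    """1D residual block in the spirit of ResNet; the paper's depth is unknown."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)      # identity skip connection

class SkillResNet(nn.Module):
    """Classify a motion signal into skill levels (2 or 3 classes in the paper)."""
    def __init__(self, in_channels=4, num_classes=3):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, 64, 7, padding=3)
        self.blocks = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):               # x: (batch, channels, T)
        h = self.blocks(torch.relu(self.stem(x)))
        return self.head(h.mean(dim=-1))  # global average pooling over time
```

Under this sketch, inference on one trial is a single tracker pass plus one forward pass, which is consistent with the abstract's reported 3 to 5 s near-real-time processing budget.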