Dept. of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy.
Dept. of Management and Production Engineering, Politecnico di Torino, Torino, Italy.
Comput Methods Programs Biomed. 2020 Jul;191:105505. doi: 10.1016/j.cmpb.2020.105505. Epub 2020 Apr 21.
We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operative scene. In order to improve the surgeon's spatial perception during robot-assisted minimally invasive procedures, we provide a robust, automatic software system that positions, rotates and scales in real time the 3D virtual model of a patient's organ, keeping it aligned over its image captured by the endoscope.
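The pose-and-scale alignment described above can be read as a rigid transform plus uniform scaling followed by a pinhole projection onto the endoscope image plane. The following Python sketch is our own illustration, not the paper's implementation; the function name, pose values and camera intrinsics are hypothetical.

```python
import numpy as np

def overlay_projection(vertices, R, t, s, K):
    """Apply uniform scale s, rotation R (3x3) and translation t (3,)
    to the organ model's vertices (Nx3), then project the result onto
    the endoscope image plane with camera intrinsics K (3x3)."""
    cam = (s * vertices) @ R.T + t      # model frame -> camera frame
    px = cam @ K.T                      # pinhole projection (homogeneous)
    return px[:, :2] / px[:, 2:3]       # perspective divide -> pixels

# Hypothetical values: identity pose, unit scale, generic intrinsics.
verts = np.array([[0.00, 0.00, 0.10],
                  [0.01, 0.00, 0.10]])
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(overlay_projection(verts, np.eye(3), np.zeros(3), 1.0, K))
```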
We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that provide specific challenges and opportunities to exploit when implementing organ-detection strategies. Hence we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system.
We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed which is currently used during in vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (TO), Italy.
The main contribution of this paper is a modular approach to the tracking problem during in vivo robotic surgery, whose efficacy from a medical point of view has been assessed in the cited works. Segmenting the whole procedure into a set of stages allows associating the best tracking strategy with each of them, as well as reusing implemented software mechanisms in stages with similar features.
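As an illustration of this modular design (a minimal sketch under our own assumptions; the stage names and tracker functions are hypothetical, not taken from the paper), a stage-to-strategy registry lets each stage dispatch to the tracker best suited to its visual features, while stages with similar features share an implementation:

```python
from typing import Any, Callable, Dict

Tracker = Callable[[Any], str]  # endoscope frame -> estimated organ pose

def feature_tracker(frame: Any) -> str:      # hypothetical strategy
    return "pose estimated from salient features"

def silhouette_tracker(frame: Any) -> str:   # hypothetical strategy
    return "pose estimated from the organ silhouette"

# One tracking strategy per surgical stage; stages with similar visual
# features reuse the same implementation.
STAGE_TRACKERS: Dict[str, Tracker] = {
    "organ_exposure": silhouette_tracker,
    "dissection": feature_tracker,
    "resection": feature_tracker,  # reuses the dissection strategy
}

def track(stage: str, frame: Any) -> str:
    """Dispatch the current endoscope frame to the stage's tracker."""
    return STAGE_TRACKERS[stage](frame)

print(track("resection", frame=None))
```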