Computer Vision and Machine-Learning Techniques for Automatic 3D Virtual Images Overlapping During Augmented Reality Guided Robotic Partial Nephrectomy.

Author Information

Amparore Daniele, Sica Michele, Verri Paolo, Piramide Federico, Checcucci Enrico, De Cillis Sabrina, Piana Alberto, Campobasso Davide, Burgio Mariano, Cisero Edoardo, Busacca Giovanni, Di Dio Michele, Piazzolla Pietro, Fiori Cristian, Porpiglia Francesco

Affiliations

Division of Urology, Dept. of Oncology, School of Medicine, University of Turin, San Luigi Hospital, Orbassano (Turin), Italy.

Department of Surgery, Candiolo Cancer Institute FPO-IRCCS, Candiolo, Italy.

Publication Information

Technol Cancer Res Treat. 2024 Jan-Dec;23:15330338241229368. doi: 10.1177/15330338241229368.

Abstract

OBJECTIVES

The purpose of this research is to develop software that automatically integrates and overlays 3D virtual models of kidneys harboring renal masses onto the Da Vinci robotic console view, assisting the surgeon during the intervention.

INTRODUCTION

Precision medicine, especially in the field of minimally invasive partial nephrectomy, aims to use 3D virtual models as guidance for augmented reality robotic procedures. However, the co-registration of the virtual images over the real operative field is currently performed manually.

METHODS

In this prospective study, two strategies for automatically overlapping the model onto the real kidney were explored: computer vision technology, which exploits the super-enhancement of the kidney produced by intraoperative injection of indocyanine green to drive the superimposition, and convolutional neural network (CNN) technology, which processes live images from the endoscope after the software has been trained on frames from prerecorded videos of the same surgery. The team, comprising a bioengineer, a software developer, and a surgeon, collaborated to create hyper-accurate 3D models for automatic 3D augmented reality (AR)-guided robot-assisted partial nephrectomy (RAPN). Demographic and clinical data were collected for each patient.
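The first strategy, as described above, keys the overlay to the indocyanine-green fluorescence signal. A minimal sketch of that idea, assuming a simple green-channel dominance threshold and a centroid as the 2D anchor point (the thresholds, function names, and synthetic frame here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def icg_mask(frame, green_ratio=1.4, min_intensity=60):
    """Boolean mask of pixels whose green channel dominates, a rough
    proxy for ICG fluorescence. frame: HxWx3 uint8 RGB image."""
    f = frame.astype(np.float32)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    dominant = g > green_ratio * np.maximum(r, b)  # green outweighs red/blue
    bright = g > min_intensity                     # ignore dim background
    return dominant & bright

def anchor_point(mask):
    """Centroid (x, y) of the fluorescent region, usable as a 2D anchor
    for positioning the virtual 3D model over the kidney."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic frame: dark field with a bright green patch standing in
# for the ICG-enhanced kidney (rows 30-59, columns 40-79).
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:60, 40:80, 1] = 200
mask = icg_mask(frame)
print(anchor_point(mask))  # centroid of the green patch
```

In a real pipeline the centroid alone would not suffice: the paper's approach additionally anchors and orients the full 3D model, and the mask would come from calibrated near-infrared imaging rather than an RGB ratio test.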

RESULTS

Two groups were defined: group A (first technology, 12 patients) and group B (second technology, 8 patients). They showed comparable preoperative and postoperative characteristics. With the first technology the average co-registration time was 7 (3-11) seconds, versus 11 (6-13) seconds with the second. No major intraoperative or postoperative complications were recorded, and there were no differences in functional outcomes between the groups at any time point considered.

CONCLUSION

The first technology allowed successful anchoring of the 3D model to the kidney, although minimal manual refinements were still required. The second technology improved automatic kidney detection without relying on indocyanine green injection, yielding better identification of organ boundaries during testing. Further studies are needed to confirm this preliminary evidence.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f75/10878218/f62b07e6a301/10.1177_15330338241229368-fig1.jpg
