
Preclinical evaluation of a markerless, real-time, augmented reality guidance system for robot-assisted radical prostatectomy.

Author information

Kalia Megha, Avinash Apeksha, Navab Nassir, Salcudean Septimiu

Affiliations

Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada.

Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748, Garching bei München, Germany.

Publication information

Int J Comput Assist Radiol Surg. 2021 Jul;16(7):1181-1188. doi: 10.1007/s11548-021-02419-9. Epub 2021 Jun 2.

Abstract

PURPOSE

Intra-operative augmented reality (AR) during surgery can mitigate incomplete cancer removal by overlaying the anatomical boundaries extracted from medical imaging data onto the camera image. In this paper, we present the first such completely markerless AR guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP) that transforms medical data from transrectal ultrasound (TRUS) to endoscope camera image. Moreover, we reduce the total number of transformations by combining the hand-eye and camera calibrations in a single step.

METHODS

Our proposed solution requires two transformations: the TRUS-to-robot transformation, T_TRUS^robot, and the camera projection matrix, P (i.e., the transformation from the endoscope to the camera image frame). T_TRUS^robot is estimated by the method proposed in Mohareri et al. (J Urol 193(1):302-312, 2015). P is estimated by selecting corresponding 3D-2D data points in the endoscope and the image coordinate frames, respectively, using a CAD model of the surgical instrument and a preoperative camera intrinsic matrix, under the assumption of a projective camera. The parameters are estimated with the Levenberg-Marquardt algorithm. Overall mean re-projection errors (MRE) are reported on simulated data and on real data from a water bath. We show that P can be re-estimated if the focus is changed during surgery.
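The projection-matrix step described above can be sketched as a nonlinear least-squares refinement over 3D-2D correspondences. The following is a minimal illustration, not the authors' implementation: SciPy's `least_squares` with `method="lm"` stands in for the Levenberg-Marquardt step, the initial guess `P0` stands in for a projection built from the preoperative intrinsics, and all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project 3D points X (N,3) through a 3x4 projective camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates (N,4)
    x = Xh @ P.T                               # image-plane points (N,3)
    return x[:, :2] / x[:, 2:3]                # dehomogenize to pixel coordinates

def residuals(p, X, x_obs):
    """Stacked reprojection residuals for the 12 entries of P."""
    return (project(p.reshape(3, 4), X) - x_obs).ravel()

def estimate_projection(X, x_obs, P0):
    """Refine a 3x4 projection matrix from 3D-2D correspondences
    with Levenberg-Marquardt, starting from an initial guess P0."""
    sol = least_squares(residuals, P0.ravel(), args=(X, x_obs), method="lm")
    return sol.x.reshape(3, 4)

def mean_reprojection_error(P, X, x_obs):
    """Mean Euclidean reprojection error in pixels (the MRE reported above)."""
    return float(np.mean(np.linalg.norm(project(P, X) - x_obs, axis=1)))
```

Because the refinement is a generic least-squares fit rather than a marker-based calibration, it can be re-run on newly picked correspondences whenever the camera parameters change, e.g., after a focus change.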

RESULTS

Using simulated data, we obtained an overall MRE in the range of 11.69-13.32 pixels for the monoscopic and stereo left and right cameras. For the water bath experiment, the overall MRE is in the range of 26.04-30.59 pixels for the monoscopic and stereo cameras. The overall system error from the TRUS frame to the camera world frame is 4.05 mm. Details of the procedure are given in the supplementary material.

CONCLUSION

We demonstrate a markerless AR guidance system for RALRP that requires no calibration markers and can therefore re-estimate the camera projection matrix if it changes during surgery, e.g., due to a focus change.

