
Stereo Dense Scene Reconstruction and Accurate Localization for Learning-Based Navigation of Laparoscope in Minimally Invasive Surgery.

Authors

Wei Ruofeng, Li Bin, Mo Hangjie, Lu Bo, Long Yonghao, Yang Bohan, Dou Qi, Liu Yunhui, Sun Dong

Publication

IEEE Trans Biomed Eng. 2023 Feb;70(2):488-500. doi: 10.1109/TBME.2022.3195027. Epub 2023 Jan 19.

Abstract

OBJECTIVE

The computation of anatomical information and the laparoscope position is a fundamental building block of surgical navigation in Minimally Invasive Surgery (MIS). Recovering a dense 3D structure of the surgical scene from visual cues alone remains challenging, and online laparoscope tracking still relies primarily on external sensors, which adds system complexity.

METHODS

Here, we propose a learning-driven framework in which image-guided laparoscope localization is obtained together with 3D reconstructions of the anatomical structures. To reconstruct the structure of the whole surgical environment, we first fine-tune a learning-based stereoscopic depth perception method for depth estimation, which is robust to texture-less and varying soft tissues. We then develop a dense reconstruction algorithm that represents the scene with surfels, estimates the laparoscope poses, and fuses the depth maps into a unified reference coordinate frame for tissue reconstruction. To estimate the poses of new laparoscope views, we develop a coarse-to-fine localization method that incorporates the reconstructed model.
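The geometric backbone of the pipeline above can be sketched in a few steps: convert stereo disparity to metric depth, back-project the depth map into camera-frame 3D points, and fuse those points into the unified reference frame using the estimated laparoscope pose. The following is a minimal sketch of that geometry only, not the authors' implementation; all function names and parameter values are illustrative, and the learned depth network and surfel bookkeeping are omitted.

```python
# Hypothetical sketch of the depth-to-reconstruction geometry described in
# the abstract. Assumes a rectified pinhole stereo rig; the learned stereo
# depth network is replaced here by a plain disparity-to-depth conversion.
import numpy as np


def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth: Z = f*b/d."""
    depth = np.zeros_like(disparity, dtype=float)
    valid = disparity > 0  # zero disparity means no match; leave depth at 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth


def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into camera-frame 3D points, shape (H*W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)


def fuse_into_world(points_cam, pose_cam_to_world):
    """Transform camera-frame points into the unified reference frame
    using a 4x4 camera-to-world pose, as in the depth-fusion step."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (pose_cam_to_world @ homo.T).T[:, :3]
```

In the full method, `pose_cam_to_world` comes from the surfel-based pose estimation (or, for new views, from the coarse-to-fine localization), and the fused points would be merged into the surfel map rather than simply concatenated.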

RESULTS

We evaluate the reconstruction method and the localization module on three datasets: the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) dataset, ex-vivo data collected with a Universal Robot (UR) and a Karl Storz laparoscope, and an in-vivo da Vinci robotic surgery dataset. The reconstructed structures preserve rich surface-texture detail with an error below 1.71 mm, and the localization module accurately tracks the laparoscope using images alone as input.

CONCLUSIONS

Experimental results demonstrate the superior performance of the proposed method in anatomy reconstruction and laparoscopic localization.

SIGNIFICANCE

The proposed framework can potentially be integrated into current surgical navigation systems.

