Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China.
Int J Comput Assist Radiol Surg. 2021 Nov;16(11):1985-1997. doi: 10.1007/s11548-021-02463-5. Epub 2021 Aug 7.
Visualization of the remote surgical scene is key to the teleoperation of surgical robots. However, current non-endoscopic surgical robot systems lack an effective visualization tool that provides sufficient surgical scene information and depth perception.
We propose a novel autostereoscopic surgical visualization system integrating 3D intraoperative scene reconstruction, autostereoscopic 3D display, and augmented reality-based image fusion. The preoperative organ structure and the intraoperative surface point cloud are obtained from medical imaging and the RGB-D camera, respectively, and aligned by an automatic marker-free intraoperative registration algorithm. After registration, preoperative meshes with precalculated illumination and intraoperative textured point cloud are blended in real time. Finally, the fused image is shown on a 3D autostereoscopic display device to achieve depth perception.
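The core of the marker-free registration step described above is estimating a rigid transform that aligns the intraoperative RGB-D point cloud with the preoperative model. As a minimal illustrative sketch (not the authors' actual algorithm, which is not detailed in the abstract), the rigid-alignment sub-problem with known correspondences can be solved in closed form with the Kabsch/Procrustes method:

```python
import numpy as np

def rigid_register(source, target):
    """Closed-form rigid alignment (Kabsch): find R, t such that
    R @ source_i + t ~= target_i, assuming known point correspondences.
    In a full ICP-style pipeline this step would be iterated with
    nearest-neighbor correspondence search."""
    src_c = source.mean(axis=0)          # source centroid
    tgt_c = target.mean(axis=0)          # target centroid
    # Cross-covariance of the centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: recover a known rigid transform.
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
moved = pts @ R_true.T + t_true
R, t = rigid_register(pts, moved)
rot_err = np.abs(R - R_true).max()
trans_err = np.abs(t - t_true).max()
```

A practical intraoperative pipeline would wrap this step in an ICP loop (or a feature-based coarse alignment followed by ICP refinement), since correspondences between the RGB-D surface and the preoperative mesh are not known in advance.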
A prototype of the autostereoscopic surgical visualization system was built. The system had a horizontal image resolution of 1.31 mm, a vertical image resolution of 0.82 mm, an average rendering rate of 33.1 FPS, an average registration rate of 20.5 FPS, and average registration errors of approximately 3 mm. A telesurgical robot prototype based on 3D autostereoscopic display was also built. Quantitative evaluation experiments showed that our system achieved operational accuracy (1.79 ± 0.87 mm) comparable to that of the conventional system (1.95 ± 0.71 mm), while offering advantages in completion time (a 34.11% reduction) and path length (a 35.87% reduction). Post-experimental questionnaires indicated that the system was user-friendly for both novices and experts.
We propose a 3D surgical visualization system with augmented instruction and depth perception for telesurgery. The qualitative and quantitative evaluation results demonstrate the accuracy and efficiency of the proposed system, indicating strong potential for robotic surgery and telesurgery.