School of Engineering, The University of Tokyo, 7-3-1 Hongo, Tokyo, 113-8656, Japan.
Int J Comput Assist Radiol Surg. 2023 Jun;18(6):1043-1051. doi: 10.1007/s11548-023-02889-z. Epub 2023 Apr 17.
Tissue deformation recovery reconstructs the changes in shape and surface strain caused by tool-tissue interaction or respiration; it provides motion and shape information essential for improving the safety of minimally invasive surgery. Binocular vision-based approaches are practical candidates for deformation recovery because they require no extra devices. However, previous methods suffer from limitations such as reliance on biomechanical priors and vulnerability to occlusion by surgical instruments. To address these issues, we propose a deformation recovery method that incorporates mesh structures and scene flow.
The method is divided into three modules. First, a two-step scene flow generation module extracts 3D motion from the binocular sequence. Second, we propose a strain-based filtering method to denoise the raw scene flow. Third, we propose a mesh optimization model that strengthens robustness to occlusion by exploiting contextual connectivity.
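The strain-based filtering idea can be illustrated with a minimal sketch. The following is not the authors' implementation; it is a hypothetical NumPy re-creation of the core principle under the assumption that a flow vector is noise when the local strain it implies (relative change in distances to nearby points) is physically implausible for soft tissue. The function name `strain_filter`, the neighbour count `k`, and the threshold `strain_thresh` are all illustrative choices:

```python
import numpy as np

def strain_filter(points, flow, k=4, strain_thresh=0.3):
    """Reject scene-flow vectors that imply implausible local strain.

    Hypothetical sketch of strain-based filtering: for each point, compare
    the distances to its k nearest neighbours before and after applying the
    flow. The median relative change in edge length is a crude, outlier-robust
    strain estimate; if it exceeds strain_thresh, the vector is treated as noise.
    """
    moved = points + flow
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        d0 = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.argsort(d0)[1:k + 1]                 # k nearest neighbours, excluding self
        d1 = np.linalg.norm(moved[nbrs] - moved[i], axis=1)
        strain = np.abs(d1 - d0[nbrs]) / d0[nbrs]      # relative edge-length change
        if np.median(strain) > strain_thresh:          # median ignores one bad neighbour
            keep[i] = False
    return keep

# Tiny demo: a flat 3x3 grid with one grossly wrong flow vector.
pts = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
flow = np.full_like(pts, 0.05)        # small rigid translation: near-zero strain
flow[4] = [5.0, 5.0, 5.0]             # corrupted vector at the centre point
mask = strain_filter(pts, flow)
print(mask.sum(), mask[4])            # prints: 8 False
```

Using the median rather than the mean strain matters here: points adjacent to the outlier see one corrupted edge but several clean ones, so only the outlier itself is rejected.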
In a phantom experiment and an in vivo experiment, the feasibility of the method in recovering surface deformation under tool-induced occlusion was demonstrated. Surface reconstruction accuracy was quantitatively evaluated in the phantom experiment by comparing the recovered mesh surface with a 3D scanned model. Results show an overall error of 0.70 ± 0.55 mm.
The method has been demonstrated to continuously recover surface deformation using a mesh representation, with robustness to occlusion caused by surgical forceps, and promises to be suitable for application in actual surgery.