Zuo Ruizhi, Wei Shuwen, Wang Yaning, Huang Ruichen, Rodgers Wayne Wonseok, Yu Jinglun, Hsieh Michael H, Krieger Axel, Kang Jin U
Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States.
Johns Hopkins University, Department of Mechanical Engineering, Baltimore, Maryland, United States.
J Biomed Opt. 2025 Aug;30(8):086003. doi: 10.1117/1.JBO.30.8.086003. Epub 2025 Aug 19.
Conventional fringe projection profilometry (FPP) requires multiple image acquisitions, and the resulting long acquisition times make it too slow for high-speed dynamic measurements. We propose and demonstrate a deep-learning-based single-shot FPP system that uses a single endoscope for surgical guidance.
We aim to achieve real-time depth map generation of target tissues with high accuracy for robotic surgical guidance.
We propose an endoscopic single-shot FPP system based on a deep learning network that generates accurate tissue depth maps in real time for surgical guidance. The system uses a dual-channel endoscope: one channel projects fringe patterns from a projector, and the other collects images with a camera. In addition, we developed a data synthesis method to generate a large and diverse training dataset. The network consists of MaskNet, which segments the tissue from the background, and DepthNet, which predicts the depth map of the image. The outputs of the two networks are combined to produce the final depth map.
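The fusion of the two network outputs can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: it assumes MaskNet produces a per-pixel tissue probability and DepthNet a dense depth map of the same resolution, and that the final map simply retains depth only where tissue is detected.

```python
import numpy as np

def fuse_depth(mask: np.ndarray, depth: np.ndarray,
               threshold: float = 0.5) -> np.ndarray:
    """Keep DepthNet predictions only where MaskNet labels tissue.

    mask  : per-pixel tissue probability in [0, 1] (hypothetical MaskNet output)
    depth : dense depth map in mm (hypothetical DepthNet output)
    Background pixels are set to 0 so they are ignored downstream.
    """
    if mask.shape != depth.shape:
        raise ValueError("mask and depth must share the same resolution")
    return np.where(mask > threshold, depth, 0.0)

# Toy 2x2 frame: tissue occupies the left column only.
mask = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
depth = np.array([[12.3, 7.1],
                  [11.8, 6.9]])
fused = fuse_depth(mask, depth)
```

In this toy frame, depth values survive in the left column and the right column is zeroed as background; the real system presumably performs this masking (or a learned equivalent) per frame within its per-frame processing budget.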
We tested our algorithm with fringe patterns of different frequencies and found that the optimal frequency for single-shot FPP in our setup is 20 Hz. The algorithm was evaluated on both synthetic and experimental data, achieving a maximum depth prediction error of and a processing time of about 12.75 ms per frame.
A deep-learning-based single-shot FPP endoscopic system was shown to be highly effective for real-time depth map generation with millimeter-scale error. Such a system has the potential to improve the reliability of image-guided robotic surgery.