IEEE Trans Biomed Eng. 2022 Apr;69(4):1378-1385. doi: 10.1109/TBME.2021.3116514. Epub 2022 Mar 18.
Optical coherence tomography (OCT) is an established medical imaging modality that has found widespread use owing to its ability to visualize tissue structures at high resolution. Current hand-held OCT imaging probes lack positional information, making it difficult or even impossible to link a specific image to the location at which it was originally acquired. In this study, we propose a camera-based localization method that tracks and records the scanner position in real time, together with a deep learning-based segmentation method.
We used camera-based visual odometry (VO) and simultaneous localization and mapping (SLAM) to compute and visualize the position of a hand-held OCT imaging probe. A deep convolutional neural network (CNN) was used for kidney tubule lumen segmentation.
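The trajectory-integration step of such a VO pipeline can be sketched as follows. This is a minimal illustration only: the frame-to-frame translation estimates and the function name are assumptions for demonstration, not the authors' implementation, and a real VO front end would derive these estimates from camera feature matching.

```python
import numpy as np

def accumulate_trajectory(rel_motions):
    """Chain per-frame relative 2D translations (dx, dy) into
    absolute probe positions, starting from the origin."""
    positions = [np.zeros(2)]
    for dxdy in rel_motions:
        positions.append(positions[-1] + np.asarray(dxdy, dtype=float))
    return np.stack(positions)

# Hypothetical frame-to-frame VO estimates, in mm
rel = [(0.1, 0.0), (0.1, 0.05), (0.0, 0.05)]
traj = accumulate_trajectory(rel)
# traj[-1] is the current probe position relative to the start
```

Accumulating relative motions this way is what lets each 2D OCT image be tagged with the probe position at acquisition time.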
For 1D translation, the mean absolute error (MAE) and standard deviation (STD) were 0.15 mm and 0.26 mm, respectively; for 2D translation, they were 0.85 mm and 0.50 mm. The Dice coefficient of the segmentation method was 0.7. The t-statistics of the t-tests between predicted and actual average densities and between predicted and actual average diameters were 7.7547e-13 and 2.2288e-15, respectively. We also applied our localization method with automatic segmentation to a preserved kidney, comparing the average density maps and average diameter maps obtained from the 3D comprehensive scan with those from the VO system scan.
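The evaluation metrics reported above can be computed as in this minimal NumPy sketch; the array names and toy inputs are illustrative assumptions, not the study's data.

```python
import numpy as np

def mae_std(predicted, actual):
    """Mean absolute error and standard deviation of the absolute errors."""
    err = np.abs(np.asarray(predicted, dtype=float)
                 - np.asarray(actual, dtype=float))
    return err.mean(), err.std()

def dice(pred_mask, true_mask):
    """Dice coefficient between two binary segmentation masks."""
    pred_mask = np.asarray(pred_mask).astype(bool)
    true_mask = np.asarray(true_mask).astype(bool)
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())

# Toy example: two 2x2 masks sharing one foreground pixel
p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 1]])
# dice(p, t) -> 2*1 / (2+2) = 0.5
```

A Dice coefficient of 1.0 would indicate perfect overlap between predicted and ground-truth tubule lumen masks; 0.7, as reported, indicates substantial but imperfect overlap.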
Our results demonstrate that VO can track the probe position with high accuracy and provides a user-friendly visualization tool for reviewing 2D OCT images in 3D space. They also indicate that deep learning can deliver high-accuracy, high-speed segmentation.
The proposed methods could potentially be used to predict delayed graft function (DGF) in kidney transplantation.