

Indoor Localization of Hand-Held OCT Probe Using Visual Odometry and Real-Time Segmentation Using Deep Learning.

Publication Information

IEEE Trans Biomed Eng. 2022 Apr;69(4):1378-1385. doi: 10.1109/TBME.2021.3116514. Epub 2022 Mar 18.

Abstract

OBJECTIVE

Optical coherence tomography (OCT) is an established medical imaging modality that has found widespread use due to its ability to visualize tissue structures at high resolution. Current hand-held OCT imaging probes lack positional information, making it difficult or even impossible to link a specific image to the location at which it was originally acquired. In this study, we propose a camera-based localization method that tracks and records the scanner position in real time, together with a deep learning-based segmentation method.

METHODS

We used camera-based visual odometry (VO) and simultaneous localization and mapping (SLAM) to compute and visualize the position of a hand-held OCT imaging probe. A deep convolutional neural network (CNN) was used to segment kidney tubule lumens.
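The abstract gives no implementation detail, but the core VO step it describes, estimating the probe's frame-to-frame motion from camera images, can be sketched with standard tools. The snippet below is a minimal, hypothetical illustration using OpenCV (ORB feature matching followed by essential-matrix pose recovery); the function name, the intrinsic matrix K, and all parameter values are assumptions for illustration, not the authors' actual pipeline.

```python
import cv2
import numpy as np

def estimate_relative_pose(prev_gray, curr_gray, K):
    """Estimate camera rotation R and (up-to-scale) translation t
    between two consecutive grayscale frames."""
    # Detect and describe ORB features in both frames.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match descriptors with cross-checked brute force; keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix and recover the relative pose.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # monocular VO: the magnitude of t is only known up to scale
```

In a full pipeline, such per-frame increments would be chained into a trajectory and the unknown monocular scale fixed by an external constraint (for example, a known probe-to-surface working distance), allowing each OCT B-scan to be stamped with a position.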

RESULTS

The mean absolute error (MAE) and standard deviation (STD) for 1D translation were 0.15 mm and 0.26 mm, respectively. For 2D translation, the MAE and STD were 0.85 mm and 0.50 mm, respectively. The Dice coefficient of the segmentation method was 0.7. The t-test statistics between predicted and actual average densities, and between predicted and actual average diameters, were 7.7547e-13 and 2.2288e-15, respectively. We also experimented on a preserved kidney, applying our localization method together with automatic segmentation, and compared the average density maps and average diameter maps obtained from the comprehensive 3D scan and the VO-system scan.
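For clarity, the error and overlap metrics reported above follow their standard definitions; the sketch below (with illustrative function names, not code from the paper) shows how MAE, STD, and the Dice coefficient are typically computed:

```python
import numpy as np

def mae_and_std(predicted, actual):
    """Mean and standard deviation of the absolute tracking error (e.g. in mm)."""
    err = np.abs(np.asarray(predicted, float) - np.asarray(actual, float))
    return err.mean(), err.std()

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between predicted and reference binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```

A Dice value of 1.0 indicates perfect overlap, so the reported 0.7 means the predicted tubule-lumen masks overlap the reference annotations by roughly that fraction.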

CONCLUSION

Our results demonstrate that VO can track the probe location with high accuracy and provides a user-friendly visualization tool for reviewing 2D OCT images in 3D space. They also indicate that deep learning can deliver both high accuracy and high speed for segmentation.

SIGNIFICANCE

The proposed methods can potentially be used to predict delayed graft function (DGF) in kidney transplantation.



