Endoscope Localization and Dense Surgical Scene Reconstruction for Stereo Endoscopy by Unsupervised Optical Flow and Kanade-Lucas-Tomasi Tracking.

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:4839-4842. doi: 10.1109/EMBC48229.2022.9871588.

DOI: 10.1109/EMBC48229.2022.9871588
PMID: 36086106
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10153602/
Abstract

In image-guided surgery, endoscope tracking and surgical scene reconstruction are critical yet equally challenging tasks. We present a hybrid visual odometry and reconstruction framework for stereo endoscopy that leverages unsupervised learning-based and traditional optical flow methods to enable concurrent endoscope tracking and dense scene reconstruction. More specifically, to reconstruct texture-less tissue surfaces, we use an unsupervised learning-based optical flow method to estimate dense depth maps from stereo images. Robust 3D landmarks are selected from the dense depth maps and tracked via the Kanade-Lucas-Tomasi tracking algorithm. The hybrid visual odometry also benefits from traditional visual odometry modules, such as keyframe insertion and local bundle adjustment. We evaluate the proposed framework on endoscopic video sequences openly available in the SCARED dataset, against both ground truth data and two other state-of-the-art methods, ORB-SLAM2 and Endo-depth. Our proposed method achieved comparable results in terms of both RMS Absolute Trajectory Error and Cloud-to-Mesh RMS Error, suggesting its potential to enable accurate endoscope tracking and scene reconstruction.

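The abstract describes a pipeline assembled from two well-known building blocks: dense depth recovered from stereo disparity, and sparse landmarks tracked with the Kanade-Lucas-Tomasi (KLT) algorithm. The short Python sketch below illustrates both in their generic form using OpenCV; it is a minimal illustration, not the authors' implementation, and the focal length, baseline, and synthetic frames are assumed placeholders.

import cv2
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    # Pinhole stereo relation: Z = f * B / d, for pixels with valid
    # disparity d > 0 (invalid pixels are left at depth 0).
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def track_landmarks_klt(prev_gray, next_gray, prev_pts):
    # Pyramidal Lucas-Kanade (KLT) tracking of sparse landmarks
    # between two consecutive grayscale frames.
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]

if __name__ == "__main__":
    # Synthetic textured frames stand in for rectified endoscope images;
    # the second frame simulates a small lateral camera motion.
    rng = np.random.default_rng(0)
    frame0 = (rng.random((240, 320)) * 255).astype(np.uint8)
    frame1 = np.roll(frame0, shift=2, axis=1)

    pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=100,
                                   qualityLevel=0.01, minDistance=7)
    p0, p1 = track_landmarks_klt(frame0, frame1, pts0)
    print(f"tracked {len(p1)} of {len(pts0)} landmarks")

    # Hypothetical intrinsics: 540 px focal length, 4.5 mm baseline.
    disp = (rng.random((240, 320)) * 64).astype(np.float32)
    depth = disparity_to_depth(disp, focal_px=540.0, baseline_m=0.0045)
    print(f"median depth: {np.median(depth[depth > 0]):.3f} m")

In the full system described by the abstract, such tracked landmarks would feed pose estimation, keyframe insertion, and local bundle adjustment, while the dense depth maps come from an unsupervised optical flow network rather than a fixed disparity routine.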

Similar Articles

1. Endoscope Localization and Dense Surgical Scene Reconstruction for Stereo Endoscopy by Unsupervised Optical Flow and Kanade-Lucas-Tomasi Tracking.
Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:4839-4842. doi: 10.1109/EMBC48229.2022.9871588.
2. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.
3. Stereo Dense Scene Reconstruction and Accurate Localization for Learning-Based Navigation of Laparoscope in Minimally Invasive Surgery.
IEEE Trans Biomed Eng. 2023 Feb;70(2):488-500. doi: 10.1109/TBME.2022.3195027. Epub 2023 Jan 19.
4. Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods.
Med Image Underst Anal. 2021 Jul;12722:337-349. doi: 10.1007/978-3-030-80432-9_26. Epub 2021 Jul 6.
5. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos.
Med Image Anal. 2021 Jul;71:102058. doi: 10.1016/j.media.2021.102058. Epub 2021 Apr 15.
6. Learning how to robustly estimate camera pose in endoscopic videos.
Int J Comput Assist Radiol Surg. 2023 Jul;18(7):1185-1192. doi: 10.1007/s11548-023-02919-w. Epub 2023 May 15.
7. Reconstruction of a 3D surface from video that is robust to missing data and outliers: application to minimally invasive surgery using stereo and mono endoscopes.
Med Image Anal. 2012 Apr;16(3):597-611. doi: 10.1016/j.media.2010.11.002. Epub 2010 Dec 10.
8. Robust feature tracking on the beating heart for a robotic-guided endoscope.
Int J Med Robot. 2011 Dec;7(4):459-68. doi: 10.1002/rcs.418. Epub 2011 Oct 7.
9. Image partitioning and illumination in image-based pose detection for teleoperated flexible endoscopes.
Artif Intell Med. 2013 Nov;59(3):185-96. doi: 10.1016/j.artmed.2013.09.002. Epub 2013 Oct 10.
10. BDIS-SLAM: a lightweight CPU-based dense stereo SLAM for surgery.
Int J Comput Assist Radiol Surg. 2024 May;19(5):811-820. doi: 10.1007/s11548-023-03055-1. Epub 2024 Jan 19.

Cited By

1. Disparity refinement framework for learning-based stereo matching methods in cross-domain setting for laparoscopic images.
J Med Imaging (Bellingham). 2023 Jul;10(4):045001. doi: 10.1117/1.JMI.10.4.045001. Epub 2023 Jul 14.

References

1. Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods.
Med Image Underst Anal. 2021 Jul;12722:337-349. doi: 10.1007/978-3-030-80432-9_26. Epub 2021 Jul 6.
2. RNNSLAM: Reconstructing the 3D colon to visualize missing regions during a colonoscopy.
Med Image Anal. 2021 Aug;72:102100. doi: 10.1016/j.media.2021.102100. Epub 2021 May 19.
3. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos.
Med Image Anal. 2021 Jul;71:102058. doi: 10.1016/j.media.2021.102058. Epub 2021 Apr 15.
4. Optical and Electromagnetic Tracking Systems for Biomedical Applications: A Critical Review on Potentialities and Limitations.
IEEE Rev Biomed Eng. 2020;13:212-232. doi: 10.1109/RBME.2019.2939091. Epub 2019 Sep 2.