
Motion Decoupling Network for Intra-Operative Motion Estimation Under Occlusion.

Publication Info

IEEE Trans Med Imaging. 2023 Oct;42(10):2924-2935. doi: 10.1109/TMI.2023.3268774. Epub 2023 Oct 2.

DOI: 10.1109/TMI.2023.3268774
PMID: 37079409
Abstract

In recent intelligent-robot-assisted surgery studies, an urgent issue is how to detect the motion of instruments and soft tissue accurately from intra-operative images. Although optical flow technology from computer vision is a powerful solution to the motion-tracking problem, it has difficulty obtaining the pixel-wise optical flow ground truth of real surgery videos for supervised learning. Thus, unsupervised learning methods are critical. However, current unsupervised methods face the challenge of heavy occlusion in the surgical scene. This paper proposes a novel unsupervised learning framework to estimate the motion from surgical images under occlusion. The framework consists of a Motion Decoupling Network to estimate the tissue and the instrument motion with different constraints. Notably, the network integrates a segmentation subnet that estimates the segmentation map of instruments in an unsupervised manner to obtain the occlusion region and improve the dual motion estimation. Additionally, a hybrid self-supervised strategy with occlusion completion is introduced to recover realistic vision clues. Extensive experiments on two surgical datasets show that the proposed method achieves accurate motion estimation for intra-operative scenes and outperforms other unsupervised methods, with a margin of 15% in accuracy. The average estimation error for tissue is less than 2.2 pixels on average for both surgical datasets.
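No code accompanies this record, but the training signal the abstract describes belongs to a well-known family: unsupervised flow methods minimize a photometric loss between one frame and the other frame warped by the estimated flow, with occluded pixels (here, those covered by the instrument segmentation) masked out so they do not corrupt the tissue-motion estimate. A minimal NumPy sketch of that idea, with all names hypothetical and nearest-neighbor warping used for simplicity:

```python
import numpy as np

def warp(image, flow):
    """Backward-warp `image` by `flow` (H, W, 2), nearest-neighbor sampling."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[y2, x2]

def masked_photometric_loss(frame1, frame2, flow, visible_mask):
    """Mean absolute photometric error, ignoring occluded pixels.

    visible_mask is 1 where a pixel is visible in both frames and
    0 where it is occluded (e.g. covered by an instrument).
    """
    reconstruction = warp(frame2, flow)
    diff = np.abs(frame1 - reconstruction) * visible_mask
    return diff.sum() / (visible_mask.sum() + 1e-8)
```

With a correct flow field, the warped second frame reconstructs the first frame and the loss goes to zero over the visible region; in the paper's framework this scalar would be a differentiable term driving the (separate) tissue and instrument flow estimates.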


Similar Articles

1. Motion Decoupling Network for Intra-Operative Motion Estimation Under Occlusion.
   IEEE Trans Med Imaging. 2023 Oct;42(10):2924-2935. doi: 10.1109/TMI.2023.3268774. Epub 2023 Oct 2.
2. Spatio-temporal layers based intra-operative stereo depth estimation network via hierarchical prediction and progressive training.
   Comput Methods Programs Biomed. 2024 Feb;244:107937. doi: 10.1016/j.cmpb.2023.107937. Epub 2023 Nov 22.
3. Long Term Safety Area Tracking (LT-SAT) with online failure detection and recovery for robotic minimally invasive surgery.
   Med Image Anal. 2018 Apr;45:13-23. doi: 10.1016/j.media.2017.12.010. Epub 2017 Dec 22.
4. Learned optical flow for intra-operative tracking of the retinal fundus.
   Int J Comput Assist Radiol Surg. 2020 May;15(5):827-836. doi: 10.1007/s11548-020-02160-9. Epub 2020 Apr 22.
5. FUN-SIS: A Fully UNsupervised approach for Surgical Instrument Segmentation.
   Med Image Anal. 2023 Apr;85:102751. doi: 10.1016/j.media.2023.102751. Epub 2023 Jan 20.
6. A New Parallel Intelligence Based Light Field Dataset for Depth Refinement and Scene Flow Estimation.
   Sensors (Basel). 2022 Dec 4;22(23):9483. doi: 10.3390/s22239483.
7. Combined 2D and 3D tracking of surgical instruments for minimally invasive and robotic-assisted surgery.
   Int J Comput Assist Radiol Surg. 2016 Jun;11(6):1109-19. doi: 10.1007/s11548-016-1393-4. Epub 2016 Apr 2.
8. Patch-based adaptive weighting with segmentation and scale (PAWSS) for visual tracking in surgical video.
   Med Image Anal. 2019 Oct;57:120-135. doi: 10.1016/j.media.2019.07.002. Epub 2019 Jul 4.
9. Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints.
   Sensors (Basel). 2022 Feb 11;22(4):1383. doi: 10.3390/s22041383.
10. Soft tissue tracking for minimally invasive surgery: learning local deformation online.
    Med Image Comput Comput Assist Interv. 2008;11(Pt 2):364-72. doi: 10.1007/978-3-540-85990-1_44.