

Instrument detection and pose estimation with rigid part mixtures model in video-assisted surgeries.

Author Affiliations

Multimedia Systems Department, Faculty of Electronics, Telecommunications, and Informatics, Gdansk University of Technology, ul. Narutowicza 11/12, Gdansk 80-233, Poland; Systems Research Institute of the Polish Academy of Sciences, ul. Newelska 6, Warsaw 01-447, Poland.

Systems Research Institute of the Polish Academy of Sciences, ul. Newelska 6, Warsaw 01-447, Poland; Biomedical Engineering Department, Faculty of Electronics, Telecommunications, and Informatics, Gdansk University of Technology, ul. Narutowicza 11/12, Gdansk 80-233, Poland.

Publication Information

Med Image Anal. 2018 May;46:244-265. doi: 10.1016/j.media.2018.03.012. Epub 2018 Mar 30.

DOI: 10.1016/j.media.2018.03.012
PMID: 29631089
Abstract

Localizing instrument parts in video-assisted surgeries is an attractive and open computer vision problem. A working algorithm would immediately find applications in computer-aided interventions in the operating theater. Knowing the location of tool parts could help virtually augment visual faculty of surgeons, assess skills of novice surgeons, and increase autonomy of surgical robots. A surgical tool varies in appearance due to articulation, viewpoint changes, and noise. We introduce a new method for detection and pose estimation of multiple non-rigid and robotic tools in surgical videos. The method uses a rigidly structured, bipartite model of end-effector and shaft parts that consistently encode diverse, pose-specific appearance mixtures of the tool. This rigid part mixtures model then jointly explains the evolving tool structure by switching between mixture components. Rigidly capturing end-effector appearance allows explicit transfer of keypoint meta-data of the detected components for full 2D pose estimation. The detector can as well delineate precise skeleton of the end-effector by transferring additional keypoints. To this end, we propose effective procedure for learning such rigid mixtures from videos and for pooling the modeled shaft part that undergoes frequent truncation at the border of the imaged scene. Notably, extensive diagnostic experiments inform that feature regularization is a key to fine-tune the model in the presence of inherent appearance bias in videos. Experiments further illustrate that estimation of end-effector pose improves upon including the shaft part in the model. We then evaluate our approach on publicly available datasets of in-vivo sequences of non-rigid tools and demonstrate state-of-the-art results.
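The abstract describes scoring a rigidly structured, bipartite model (end-effector and shaft parts) and switching between pose-specific mixture components. The toy sketch below illustrates that general idea in the spirit of part-mixture detectors; the function names, the dictionary layout of a mixture component, and the scalar offset/bias scoring are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: a toy part-mixture scoring routine.
# Each mixture component models one pose-specific appearance of the tool:
# an end-effector filter, a shaft filter, and a rigid (dy, dx) offset of
# the shaft relative to the end-effector. All names are assumptions.

def part_response(feature_map, part_filter):
    """Cross-correlate a part template with a 2D feature map (valid mode)."""
    fh, fw = part_filter.shape
    H, W = feature_map.shape
    out = np.empty((H - fh + 1, W - fw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(feature_map[y:y + fh, x:x + fw] * part_filter)
    return out

def score_mixture(feature_map, mixtures):
    """Return the best detection score and the winning mixture component.

    The detector 'switches between mixture components' by taking the max
    over components; within a component, the end-effector and shaft
    responses are combined at a fixed rigid offset.
    """
    best_score, best_comp = -np.inf, -1
    for k, comp in enumerate(mixtures):
        ee = part_response(feature_map, comp["end_effector"])
        sh = part_response(feature_map, comp["shaft"])
        dy, dx = comp["offset"]
        for y in range(ee.shape[0]):
            for x in range(ee.shape[1]):
                sy, sx = y + dy, x + dx
                # only score anchors where both parts fit rigidly in frame
                if 0 <= sy < sh.shape[0] and 0 <= sx < sh.shape[1]:
                    s = ee[y, x] + sh[sy, sx] + comp["bias"]
                    if s > best_score:
                        best_score, best_comp = s, k
    return best_score, best_comp
```

The winning component index identifies which pose-specific appearance fired, which is what would let keypoint meta-data attached to that component be transferred for 2D pose estimation.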


Similar Articles

1. Instrument detection and pose estimation with rigid part mixtures model in video-assisted surgeries. Med Image Anal. 2018 May;46:244-265. doi: 10.1016/j.media.2018.03.012. Epub 2018 Mar 30.
2. Hand-held robotic instrument for dextrous laparoscopic interventions. Int J Med Robot. 2008 Dec;4(4):331-8. doi: 10.1002/rcs.214.
3. An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training. Int J Med Robot. 2013 Dec;9(4):e34-51. doi: 10.1002/rcs.1485. Epub 2013 Jan 25.
4. Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method. Comput Assist Surg (Abingdon). 2017 Dec;22(sup1):26-35. doi: 10.1080/24699322.2017.1378777. Epub 2017 Sep 22.
5. Toward detection and localization of instruments in minimally invasive surgery. IEEE Trans Biomed Eng. 2013 Apr;60(4):1050-8. doi: 10.1109/TBME.2012.2229278. Epub 2012 Nov 21.
6. Real-time localization of articulated surgical instruments in retinal microsurgery. Med Image Anal. 2016 Dec;34:82-100. doi: 10.1016/j.media.2016.05.003. Epub 2016 May 13.
7. Video processing to locate the tooltip position in surgical eye-hand coordination tasks. Surg Innov. 2015 Jun;22(3):285-93. doi: 10.1177/1553350614541859. Epub 2014 Jul 21.
8. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery. Artif Organs. 2013 Jan;37(1):107-12. doi: 10.1111/j.1525-1594.2012.01543.x. Epub 2012 Oct 9.
9. Articulated Multi-Instrument 2-D Pose Estimation Using Fully Convolutional Networks. IEEE Trans Med Imaging. 2018 May;37(5):1276-1287. doi: 10.1109/TMI.2017.2787672.
10. Articulated human detection with flexible mixtures of parts. IEEE Trans Pattern Anal Mach Intell. 2013 Dec;35(12):2878-90. doi: 10.1109/TPAMI.2012.261.

Cited By

1. Use of artificial intelligence in the analysis of digital videos of invasive surgical procedures: scoping review. BJS Open. 2025 Jul 1;9(4). doi: 10.1093/bjsopen/zraf073.
2. Artificial intelligence integration in surgery through hand and instrument tracking: a systematic literature review. Front Surg. 2025 Feb 26;12:1528362. doi: 10.3389/fsurg.2025.1528362. eCollection 2025.
3. ArthroNavi framework: stereo endoscope-guided instrument localization for arthroscopic minimally invasive surgeries. J Biomed Opt. 2023 Oct;28(10):106002. doi: 10.1117/1.JBO.28.10.106002. Epub 2023 Oct 14.
4. Multi-Stage Temporal Convolutional Network with Moment Loss and Positional Encoding for Surgical Phase Recognition. Diagnostics (Basel). 2022 Dec 29;13(1):107. doi: 10.3390/diagnostics13010107.
5. Capturing fine-grained details for video-based automation of suturing skills assessment. Int J Comput Assist Radiol Surg. 2023 Mar;18(3):545-552. doi: 10.1007/s11548-022-02778-x. Epub 2022 Oct 25.