
Occlusion-robust markerless surgical instrument pose estimation

Authors

Xu Haozheng, Giannarou Stamatia

Affiliation

Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, UK.

Publication

Healthc Technol Lett. 2024 Nov 27;11(6):327-335. doi: 10.1049/htl2.12100. eCollection 2024 Dec.

Abstract

The estimation of the pose of surgical instruments is important in Robot-assisted Minimally Invasive Surgery (RMIS) to assist surgical navigation and enable autonomous robotic task execution. The performance of current instrument pose estimation methods deteriorates significantly in the presence of partial tool visibility, occlusions, and changes in the surgical scene. In this work, a vision-based framework is proposed for markerless estimation of the 6DoF pose of surgical instruments. To deal with partial instrument visibility, a keypoint object representation is used and stable and accurate instrument poses are computed using a PnP solver. To boost the learning process of the model under occlusion, a new mask-based data augmentation approach has been proposed. To validate the model, a dataset for instrument pose estimation with highly accurate ground truth data has been generated using different surgical robotic instruments. The proposed network can achieve submillimeter accuracy and the experimental results verify its generalisability to different shapes of occlusion.

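The mask-based data augmentation mentioned in the abstract can be sketched as pasting synthetic occluders into training images. The version below is a simplified stand-in using rectangular masks in pure NumPy; the paper's actual occluder shapes and pasting strategy are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def occlude(image, mask_size=(32, 32)):
    """Return a copy of `image` with a random rectangular patch zeroed out,
    simulating an occluder over the instrument during training."""
    h, w = image.shape[:2]
    mh, mw = mask_size
    top = rng.integers(0, h - mh + 1)    # random top-left corner of the mask
    left = rng.integers(0, w - mw + 1)
    out = image.copy()
    out[top:top + mh, left:left + mw] = 0.0
    return out

img = rng.random((240, 320, 3))  # placeholder training image
aug = occlude(img)
```

Training the keypoint detector on such occluded copies exposes it to missing evidence, which is the mechanism the abstract credits for robustness to differently shaped occlusions.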

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b76/11665797/e65dee8218b7/HTL2-11-327-g006.jpg
