Improving instrument detection for a robotic scrub nurse using multi-view voting.

Author Information

Badilla-Solórzano Jorge, Ihler Sontje, Gellrich Nils-Claudius, Spalthoff Simon

Affiliations

Institute of Mechatronic Systems, Leibniz University Hannover, Garbsen, Germany.

Department of Cranio-Maxillofacial Surgery, Hannover Medical School, Hannover, Germany.

Publication Information

Int J Comput Assist Radiol Surg. 2023 Nov;18(11):1961-1968. doi: 10.1007/s11548-023-03002-0. Epub 2023 Aug 2.

Abstract

PURPOSE

A basic task of a robotic scrub nurse is surgical instrument detection. Deep learning techniques could potentially address this task; nevertheless, their performance is subject to some degree of error, which could render them unsuitable for real-world applications. In this work, we aim to demonstrate how the combination of a trained instrument detector with an instance-based voting scheme that considers several frames and viewpoints is enough to guarantee a strong improvement in the instrument detection task.

METHODS

We exploit the typical setup of a robotic scrub nurse to collect RGB data and point clouds from different viewpoints. Using trained Mask R-CNN models, we obtain predictions from each view. We propose a multi-view voting scheme based on predicted instances that combines the gathered data and predictions to produce a reliable map of the location of the instruments in the scene.
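
For intuition, below is a minimal sketch of an instance-based multi-view voting step under simplifying assumptions: each Mask R-CNN instance is assumed to have already been lifted to 3D via the corresponding point cloud and transformed into a common world frame, and instances are binned on a coarse grid over the instrument table. The function name, grid binning, and data layout are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
import numpy as np
from collections import Counter

def multi_view_vote(per_view_instances, cell_size=0.02):
    """Fuse instance predictions from several viewpoints by majority vote.

    per_view_instances: list over viewpoints; each entry is a list of
    (class_label, points_xyz) pairs, where points_xyz is an (N, 3) array of
    the instance's 3D points already expressed in a shared world frame
    (e.g. using the robot's known camera poses).
    Returns a dict mapping a grid-cell key to the winning class label.
    """
    votes = {}  # grid-cell key -> Counter of class labels
    for instances in per_view_instances:
        for label, points_xyz in instances:
            # Anchor each detected instance at its centroid and bin it
            # into a coarse 2D grid on the instrument table plane.
            cx, cy, _ = points_xyz.mean(axis=0)
            key = (int(np.floor(cx / cell_size)), int(np.floor(cy / cell_size)))
            votes.setdefault(key, Counter())[label] += 1

    # Keep the most frequent label per cell; ties resolve arbitrarily here.
    return {key: counter.most_common(1)[0][0]
            for key, counter in votes.items()}
```

With, say, five viewpoints, each cell can collect up to five votes, so an occasional per-view misclassification is outvoted by the remaining correct detections.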

RESULTS

Our approach reduces the number of errors by more than 82% compared with the single-view case. On average, the data from five viewpoints are sufficient to infer the correct instrument arrangement with our best model.

CONCLUSION

Our approach can drastically improve an instrument detector's performance. Our method is practical and can be applied during an actual medical procedure without negatively affecting the surgical workflow. Our implementation and data are made available for the scientific community ( https://github.com/Jorebs/Multi-view-Voting-Scheme ).

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/684d/10589190/51a8860b8ef7/11548_2023_3002_Fig1_HTML.jpg
