FUN-SIS: A Fully UNsupervised approach for Surgical Instrument Segmentation.

Author Information

Sestini Luca, Rosa Benoit, De Momi Elena, Ferrigno Giancarlo, Padoy Nicolas

Affiliations

ICube, University of Strasbourg, CNRS, IHU Strasbourg, France; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy.

ICube, University of Strasbourg, CNRS, IHU Strasbourg, France.

Publication Information

Med Image Anal. 2023 Apr;85:102751. doi: 10.1016/j.media.2023.102751. Epub 2023 Jan 20.

Abstract

Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery. So far, state-of-the-art approaches rely entirely on a ground-truth supervision signal obtained via manual annotation, which is expensive to collect at large scale. In this paper, we present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation. FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos, relying solely on implicit motion information and instrument shape-priors. We define shape-priors as realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos. The shape-priors can be collected in various convenient ways, such as recycling existing annotations from other datasets. We leverage them as part of a novel generative-adversarial approach, which allows performing unsupervised instrument segmentation of optical-flow images during training. We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this end, we develop a learning-from-noisy-labels architecture designed to extract a clean supervision signal from these pseudo-labels by leveraging their peculiar noise properties. We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset. The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with those of fully-supervised state-of-the-art approaches. This suggests the tremendous potential of the proposed method to leverage the large amounts of unlabelled data produced in the context of minimally invasive surgery.
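The core intuition behind the pseudo-labelling stage can be caricatured with a toy sketch: instruments move relative to the anatomical background, so regions of strong apparent motion serve as candidate instrument masks. This is only an illustrative assumption, not the paper's method: FUN-SIS segments optical-flow images with a generative-adversarial model matched against shape-priors, whereas the `motion_pseudo_mask` function below substitutes plain frame differencing for optical flow, and its threshold value is arbitrary.

```python
import numpy as np

def motion_pseudo_mask(prev_frame, next_frame, threshold=0.1):
    """Toy stand-in for flow-based pseudo-labelling: mark pixels whose
    intensity changes strongly between consecutive frames as 'instrument'.
    (Illustrative only; not the adversarial segmentation used in FUN-SIS.)"""
    motion = np.abs(next_frame.astype(np.float32) - prev_frame.astype(np.float32))
    if motion.ndim == 3:  # colour input: average the per-channel change
        motion = motion.mean(axis=-1)
    return (motion > threshold).astype(np.uint8)

# Tiny demo: a bright 3x3 "instrument" shifts one pixel to the right.
prev_frame = np.zeros((8, 8), dtype=np.float32)
next_frame = np.zeros((8, 8), dtype=np.float32)
prev_frame[2:5, 2:5] = 1.0
next_frame[2:5, 3:6] = 1.0
mask = motion_pseudo_mask(prev_frame, next_frame)
```

Such noisy motion masks systematically miss momentarily static instruments and pick up moving tissue, which is exactly the structured noise that the paper's learning-from-noisy-labels architecture is designed to filter out.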
