
Boundary-aware information maximization for self-supervised medical image segmentation.

Authors

Peng Jizong, Wang Ping, Pedersoli Marco, Desrosiers Christian

Affiliation

ETS Montréal, 1100 Notre-Dame St W, Montreal H3C 1K3, QC, Canada.

Publication

Med Image Anal. 2024 May;94:103150. doi: 10.1016/j.media.2024.103150. Epub 2024 Mar 28.

Abstract

Self-supervised representation learning can boost the performance of a pre-trained network on downstream tasks for which labeled data is limited. A popular method based on this paradigm, known as contrastive learning, works by constructing sets of positive and negative pairs from the data, and then pulling closer the representations of positive pairs while pushing apart those of negative pairs. Although contrastive learning has been shown to improve performance in various classification tasks, its application to image segmentation has been more limited. This stems in part from the difficulty of defining positive and negative pairs for dense feature maps without having access to pixel-wise annotations. In this work, we propose a novel self-supervised pre-training method that overcomes the challenges of contrastive learning in image segmentation. Our method leverages Information Invariant Clustering (IIC) as an unsupervised task to learn a local representation of images in the decoder of a segmentation network, but addresses three important drawbacks of this approach: (i) the difficulty of optimizing the loss based on mutual information maximization; (ii) the lack of clustering consistency for different random transformations of the same image; (iii) the poor correspondence of clusters obtained by IIC with region boundaries in the image. Toward this goal, we first introduce a regularized mutual information maximization objective that encourages the learned clusters to be balanced and consistent across different image transformations. We also propose a boundary-aware loss based on cross-correlation, which helps the learned clusters to be more representative of important regions in the image. Compared to contrastive learning applied in dense features, our method does not require computing positive and negative pairs and also enhances interpretability through the visualization of learned clusters. 
Comprehensive experiments involving four different medical image segmentation tasks reveal the high effectiveness of our self-supervised representation learning method. Our results show the proposed method to outperform by a large margin several state-of-the-art self-supervised and semi-supervised approaches for segmentation, reaching a performance close to full supervision with only a few labeled examples.
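The regularized objective described above builds on the mutual-information maximization of Invariant Information Clustering (IIC): soft cluster assignments are computed for an image and a transformed version of it, their joint distribution over cluster pairs is estimated, and the mutual information of that joint is maximized so that assignments stay consistent across transformations while clusters remain balanced. A minimal sketch of the plain (unregularized) IIC mutual-information estimate is shown below; the function name, array shapes, and epsilon value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def iic_mutual_info(p1, p2, eps=1e-10):
    """IIC-style mutual information between soft cluster assignments.

    p1, p2: arrays of shape (n_samples, n_clusters) holding softmax
    cluster probabilities for two views of the same samples.
    Returns a scalar MI estimate to be *maximized* during training.
    """
    # Joint distribution over cluster pairs, averaged across samples
    joint = p1.T @ p2 / p1.shape[0]
    # Symmetrize (the pairing of views is arbitrary) and renormalize
    joint = (joint + joint.T) / 2.0
    joint = joint / joint.sum()
    # Marginals over each view's clusters
    pi = joint.sum(axis=1, keepdims=True)
    pj = joint.sum(axis=0, keepdims=True)
    # MI = sum_ij P_ij * (log P_ij - log P_i - log P_j)
    return float((joint * (np.log(joint + eps)
                           - np.log(pi + eps)
                           - np.log(pj + eps))).sum())
```

For perfectly consistent, balanced one-hot assignments over k clusters this estimate reaches its maximum log k, while cluster-independent (uniform) assignments give zero; the paper's regularization further encourages the balanced, transformation-consistent regime.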

