Multi-modal contrastive mutual learning and pseudo-label re-learning for semi-supervised medical image segmentation.

Authors

Zhang Shuo, Zhang Jiaojiao, Tian Biao, Lukasiewicz Thomas, Xu Zhenghua

Affiliations

State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, China.

Department of Computer Science, University of Oxford, United Kingdom.

Publication

Med Image Anal. 2023 Jan;83:102656. doi: 10.1016/j.media.2022.102656. Epub 2022 Oct 17.

Abstract

Semi-supervised learning has great potential for medical image segmentation tasks where only a few labeled samples are available, but most existing methods consider only single-modal data. The complementary characteristics of multi-modal data can improve the semi-supervised segmentation performance of each image modality. However, a shortcoming of most existing multi-modal solutions is that, because the processing models for the different modalities are highly coupled, multi-modal data are required not only at training time but also at inference time, which limits their usage in clinical practice. Consequently, we propose a semi-supervised contrastive mutual learning (Semi-CML) segmentation framework, in which a novel area-similarity contrastive (ASC) loss leverages cross-modal information and the prediction consistency between modalities to conduct contrastive mutual learning. Although Semi-CML improves the segmentation performance of both modalities simultaneously, a performance gap remains between the two modalities, i.e., the segmentation performance of one modality is usually better than that of the other. We therefore further develop a soft pseudo-label re-learning (PReL) scheme to remedy this gap. We conducted experiments on two public multi-modal datasets. The results show that Semi-CML with PReL greatly outperforms state-of-the-art semi-supervised segmentation methods and achieves performance similar to (and sometimes even better than) fully supervised segmentation methods trained on 100% labeled data, while reducing the cost of data annotation by 90%. We also conducted ablation studies to evaluate the effectiveness of the ASC loss and the PReL module.
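The abstract does not give the exact formulation of the ASC loss. As a rough, illustrative sketch of the underlying idea, one could measure an "area similarity" between the soft segmentation maps predicted for the two modalities with a soft-Dice overlap and penalize disagreement; the function names and the soft-Dice choice here are assumptions, not the paper's definition:

```python
import numpy as np

def soft_dice_similarity(p, q, eps=1e-6):
    """Soft Dice overlap between two probability maps of the same shape.

    Returns a value in (0, 1]; 1 means the two predicted foreground
    areas coincide exactly.
    """
    inter = 2.0 * np.sum(p * q)
    return (inter + eps) / (np.sum(p) + np.sum(q) + eps)

def asc_loss_sketch(pred_a, pred_b):
    """Hypothetical area-similarity contrastive term: low when the two
    modalities' predictions agree, approaching 1 when they are disjoint.
    This is an illustration of the consistency idea only, not the ASC
    loss as defined in the paper.
    """
    return 1.0 - soft_dice_similarity(pred_a, pred_b)
```

Under this sketch, mutual learning would add such a consistency term to each modality's supervised loss on unlabeled images, so each network is pushed toward predictions that agree with its cross-modal counterpart.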

