Non-equivalent images and pixels: Confidence-aware resampling with meta-learning mixup for polyp segmentation.

Author information

Guo Xiaoqing, Chen Zhen, Liu Jun, Yuan Yixuan

Affiliation information

Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China.

Department of Mechanical Engineering, City University of Hong Kong, Hong Kong SAR, China.

Publication information

Med Image Anal. 2022 May;78:102394. doi: 10.1016/j.media.2022.102394. Epub 2022 Feb 18.

Abstract

Automatic segmentation of polyp regions in endoscopic images is essential for the early diagnosis and surgical planning of colorectal cancer. Recently, deep learning-based approaches have achieved remarkable progress in polyp segmentation, but they come at the expense of laborious large-scale pixel-wise annotations. In addition, these models treat all samples equally, which may cause unstable training due to polyp variability. To address these issues, we propose a novel Meta-Learning Mixup (MLMix) data augmentation method and a Confidence-Aware Resampling (CAR) strategy for polyp segmentation. MLMix adaptively learns the interpolation policy for mixup data in a data-driven way, thereby converting the original soft mixup label into a reliable hard label and enriching the limited training dataset. Considering the segmentation difficulty caused by polyp image variability, the CAR strategy progressively selects relatively confident images and pixels to strengthen the representation ability of the model and to keep the training procedure stable. Moreover, the CAR strategy leverages prior knowledge of the class distribution and assigns different penalty coefficients to the polyp and normal classes to rebalance the selected data distribution. The effectiveness of the proposed MLMix data augmentation method and CAR strategy is demonstrated through comprehensive experiments, and our proposed model achieves state-of-the-art performance with 87.450% Dice on the EndoScene test set and 86.453% Dice on the wireless capsule endoscopy (WCE) polyp dataset.
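The sketch below is a minimal, non-authoritative illustration of the two ideas summarized in the abstract. It is plain NumPy; the function names, the Beta-sampled mixing coefficient, the confidence threshold, and the class penalty weights are all assumptions of this note rather than the paper's actual meta-learned interpolation policy or CAR schedule.

```python
import numpy as np

def mlmix_style_augment(img_a, mask_a, img_b, mask_b, alpha=0.4, thr=0.5):
    """Sketch of mixup for segmentation with soft-to-hard label conversion.

    Assumption: lambda is drawn from Beta(alpha, alpha); the paper instead
    learns the interpolation policy via meta-learning (MLMix).
    """
    lam = np.random.beta(alpha, alpha)
    mixed_img = lam * img_a + (1.0 - lam) * img_b
    soft_mask = lam * mask_a + (1.0 - lam) * mask_b   # soft mixup label in [0, 1]
    hard_mask = (soft_mask >= thr).astype(np.uint8)   # converted to a hard label
    return mixed_img, hard_mask

def car_style_pixel_loss(prob_polyp, hard_mask, conf_thr=0.7,
                         w_polyp=2.0, w_normal=1.0, eps=1e-7):
    """Sketch of confidence-aware pixel selection with class rebalancing.

    Assumptions: prob_polyp is the per-pixel polyp probability predicted by
    the model; pixels whose predicted probability for the ground-truth class
    falls below conf_thr are ignored, and the polyp / normal classes receive
    different penalty coefficients (w_polyp, w_normal). The actual CAR
    selection schedule and coefficients in the paper differ.
    """
    prob = np.clip(prob_polyp, eps, 1.0 - eps)
    conf = np.where(hard_mask == 1, prob, 1.0 - prob)  # confidence of true class
    keep = (conf >= conf_thr).astype(np.float64)       # selected confident pixels
    ce = -(hard_mask * np.log(prob) + (1 - hard_mask) * np.log(1.0 - prob))
    weights = np.where(hard_mask == 1, w_polyp, w_normal)
    selected = keep * weights
    return float((selected * ce).sum() / max(selected.sum(), eps))
```

For instance, two image/mask pairs of shape (H, W) can be passed to mlmix_style_augment to obtain an augmented training pair, and the model's sigmoid output for that pair can then be scored with car_style_pixel_loss; raising conf_thr over training epochs mimics the progressive selection described in the abstract.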

