University of Electronic Science and Technology of China, Chengdu, China.
Sichuan Provincial People's Hospital, Chengdu, China.
Comput Med Imaging Graph. 2024 Sep;116:102416. doi: 10.1016/j.compmedimag.2024.102416. Epub 2024 Jul 9.
Although deep learning has achieved state-of-the-art performance in automatic medical image segmentation, it typically requires a large amount of pixel-level manual annotation for training. Obtaining such high-quality annotations is time-consuming and requires specialized expertise, which hinders the widespread development of models that rely on them for good segmentation performance. Scribble annotations can substantially reduce the annotation cost, but often lead to poor segmentation performance due to insufficient supervision. In this work, we propose a novel framework named ScribSD+, based on multi-scale knowledge distillation and class-wise contrastive regularization, for learning from scribble annotations. Given a student network supervised by scribbles and a teacher network updated by Exponential Moving Average (EMA), we first introduce multi-scale prediction-level Knowledge Distillation (KD), which leverages the teacher network's soft predictions to supervise the student at multiple scales, and then propose class-wise contrastive regularization, which encourages feature similarity within the same class and dissimilarity across different classes, thereby effectively improving the segmentation performance of the student network. Experimental results on the ACDC dataset for heart structure segmentation and a fetal MRI dataset for placenta and fetal brain segmentation demonstrate that our method significantly improves the student's performance and outperforms five state-of-the-art scribble-supervised learning methods. Consequently, the method has potential for reducing the annotation cost of developing deep learning models for clinical diagnosis.
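To make the two key ingredients of the teacher–student setup concrete, the following is a minimal NumPy sketch of (a) the EMA teacher update and (b) a multi-scale prediction-level KD loss using temperature-softened predictions. This is an illustrative reconstruction, not the authors' implementation: the temperature `T`, per-scale weights, and EMA decay `alpha` are assumed hyperparameters, and the class-wise contrastive term is omitted here.

```python
import numpy as np

def softmax(logits, T=1.0, axis=-1):
    # Temperature-scaled softmax over the class axis.
    z = logits / T
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened predictions, averaged over
    # pixels; the conventional T^2 factor keeps gradient scale stable.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8))).sum(axis=-1)
    return (T ** 2) * kl.mean()

def multiscale_kd_loss(student_preds, teacher_preds, T=2.0, weights=None):
    # Sum the KD loss over predictions taken at several decoder scales.
    if weights is None:
        weights = [1.0] * len(student_preds)
    return sum(w * kd_loss(s, t, T)
               for w, s, t in zip(weights, student_preds, teacher_preds))

def ema_update(teacher_params, student_params, alpha=0.99):
    # Teacher weights track the student: t <- alpha*t + (1-alpha)*s.
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]
```

In a training loop, the scribble-supervised loss (e.g. partial cross-entropy on annotated pixels) would be combined with `multiscale_kd_loss` on the student's multi-scale outputs, and `ema_update` would be applied to the teacher after each student optimizer step.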