Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
Med Image Anal. 2021 May;70:102010. doi: 10.1016/j.media.2021.102010. Epub 2021 Feb 22.
Convolutional neural networks have achieved prominent success on a variety of medical imaging tasks when a large amount of labeled training data is available. However, acquiring expert annotations for medical data is usually expensive and time-consuming, which poses a great challenge for supervised learning approaches. In this work, we propose a novel semi-supervised deep learning method, i.e., deep virtual adversarial self-training with consistency regularization, for large-scale medical image classification. To effectively exploit the information in unlabeled data, we combine self-training with consistency regularization, which improves the discriminative capability of the trained model. More concretely, the model first generates a pseudo-label from its prediction on a weakly-augmented input image; the pseudo-label is kept only if the corresponding class probability is sufficiently confident. The model's prediction on a strongly-augmented version of the same image is then encouraged to be consistent with this pseudo-label. To improve the robustness of the network against virtual adversarial perturbations of the input, we incorporate virtual adversarial training (VAT) on both labeled and unlabeled data into the training process. Hence, the network is trained by minimizing a combination of three losses: a standard supervised loss on labeled data, a consistency regularization loss on unlabeled data, and a VAT loss on both labeled and unlabeled data. We extensively evaluate the proposed semi-supervised method on two challenging medical image classification tasks: breast cancer screening from ultrasound images and multi-class ophthalmic disease classification from optical coherence tomography B-scan images. Experimental results demonstrate that the proposed method outperforms both the supervised baseline and other state-of-the-art methods by a large margin on both tasks.
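The following is a minimal PyTorch-style sketch of how the three losses described in the abstract could be combined in one training step. It is an illustrative reading of the method, not the authors' implementation: the function names (vat_loss, semi_supervised_loss), the hyperparameter values (confidence threshold tau, loss weights lambda_u and lambda_vat, VAT radius eps and step xi), the single power-iteration approximation for VAT, and the assumption of 4D image batches are all assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.5, n_iter=1):
    """Virtual adversarial training loss (sketch): approximate the input
    perturbation that most changes the prediction, then penalize the
    divergence between the clean and perturbed predictions."""
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)
    # Random unit direction to start the power-iteration search (x assumed 4D: B x C x H x W).
    d = torch.randn_like(x)
    d = d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
    for _ in range(n_iter):
        d.requires_grad_(True)
        pred_hat = model(x + xi * d)
        adv_kl = F.kl_div(F.log_softmax(pred_hat, dim=1), pred, reduction="batchmean")
        grad = torch.autograd.grad(adv_kl, d)[0]
        d = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
        d = d.detach()
    # Divergence under the (approximate) worst-case perturbation of radius eps.
    pred_hat = model(x + eps * d)
    return F.kl_div(F.log_softmax(pred_hat, dim=1), pred, reduction="batchmean")

def semi_supervised_loss(model, x_lab, y_lab, x_unlab_weak, x_unlab_strong,
                         tau=0.95, lambda_u=1.0, lambda_vat=1.0):
    # 1) Standard supervised cross-entropy on labeled data.
    loss_sup = F.cross_entropy(model(x_lab), y_lab)

    # 2) Consistency regularization: pseudo-label the weakly-augmented view,
    #    keep only confident predictions, and require the strongly-augmented
    #    view of the same image to match the retained pseudo-labels.
    with torch.no_grad():
        probs = F.softmax(model(x_unlab_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()
    logits_strong = model(x_unlab_strong)
    loss_cons = (F.cross_entropy(logits_strong, pseudo, reduction="none") * mask).mean()

    # 3) VAT loss on both labeled and unlabeled inputs.
    loss_vat = vat_loss(model, torch.cat([x_lab, x_unlab_weak], dim=0))

    # Weighted combination of the three losses.
    return loss_sup + lambda_u * loss_cons + lambda_vat * loss_vat
```

In a training loop, each step would sample one labeled and one unlabeled mini-batch, build the weakly- and strongly-augmented views of the unlabeled images, and backpropagate the returned combined loss.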