Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China.
Youtu Lab, Tencent, Shenzhen, China.
BMC Bioinformatics. 2019 Aug 28;20(1):445. doi: 10.1186/s12859-019-2979-y.
Recent advances in deep learning have attracted researchers to apply it to medical image analysis. However, pathological image analysis based on deep learning networks faces a number of challenges, such as the high resolution (gigapixel) of pathological images and the scarcity of reliable annotations. To address these challenges, we propose a training strategy, deep reverse active learning (DRAL), together with an atrous DenseNet (ADN) for pathological image classification. The proposed DRAL improves the classification accuracy of widely used deep learning networks such as VGG-16 and ResNet by removing mislabeled patches from the training set. As the size of a cancer area varies widely in pathological images, the proposed ADN integrates atrous convolutions with the dense block for multiscale feature extraction.
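The core idea behind DRAL, removing training patches whose labels the trained model contradicts with high confidence, can be illustrated with a minimal sketch. This is a simplified stand-in, not the authors' algorithm: the function names (`filter_mislabeled`, `toy_proba`) and the fixed confidence threshold are hypothetical, and a toy classifier replaces the actual network.

```python
# Hypothetical illustration of the DRAL idea: after a training pass,
# drop patches whose given labels the model contradicts confidently.

def filter_mislabeled(patches, predict_proba, threshold=0.9):
    """Keep a (features, label) patch unless the model assigns at least
    `threshold` probability to a class other than the given label."""
    kept = []
    for features, label in patches:
        probs = predict_proba(features)
        pred = max(range(len(probs)), key=probs.__getitem__)
        if pred != label and probs[pred] >= threshold:
            continue  # likely mislabeled: exclude from the training set
        kept.append((features, label))
    return kept

# Toy two-class "model": predicts class 0 iff the feature sum is < 1.
def toy_proba(features):
    p0 = 0.95 if sum(features) < 1 else 0.05
    return [p0, 1.0 - p0]

train = [
    ([0.2, 0.3], 0),  # consistent with the toy model: kept
    ([0.2, 0.3], 1),  # model confidently disagrees: removed
    ([1.5, 0.7], 1),  # consistent with the toy model: kept
]
clean = filter_mislabeled(train, toy_proba)
print(len(clean))  # 2
```

In the actual framework the probabilities would come from the deep network itself, and the filtering is interleaved with retraining rather than applied once.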
The proposed DRAL and ADN are evaluated on three pathological datasets: BACH, CCG, and UCSB. The experimental results demonstrate the excellent performance of the proposed DRAL + ADN framework, which achieves patch-level average classification accuracies (ACA) of 94.10%, 92.05%, and 97.63% on the BACH, CCG, and UCSB validation sets, respectively.
The DRAL + ADN framework is a potential candidate for boosting the performance of deep learning models for partially mislabeled training datasets.