Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA.
Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA.
Artif Intell Med. 2021 Sep;119:102136. doi: 10.1016/j.artmed.2021.102136. Epub 2021 Aug 6.
Developing deep learning models to analyze histology images is computationally challenging, as the massive size of the images strains every part of the computing pipeline. This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification. The proposed approach is robust when used with images of reduced input resolution and can be trained effectively with limited labeled data. Moreover, our approach operates at either the tissue or slide level, removing the need for laborious patch-level labeling. Our method uses knowledge distillation to transfer knowledge from a teacher model pre-trained at high resolution to a student model trained on the same images at considerably lower resolution. In addition, to address the lack of large-scale labeled histology image datasets, we perform the knowledge distillation in a self-supervised fashion. We evaluate our approach on three distinct histology image datasets associated with celiac disease, lung adenocarcinoma, and renal cell carcinoma. Our results on these datasets demonstrate that combining knowledge distillation with self-supervision allows the student model to approach, and in some cases surpass, the teacher model's classification accuracy while being far more computationally efficient. Additionally, we observe that student classification performance increases with the size of the unlabeled dataset, indicating that the method has the potential to scale further with additional unlabeled data. Our model outperforms the high-resolution teacher model for celiac disease in accuracy, F1-score, precision, and recall while requiring 4 times fewer computations. For lung adenocarcinoma, our results at 1.25× magnification are within 1.5% of those of the teacher model at 10× magnification, with a 64-fold reduction in computational cost. Our model for renal cell carcinoma at 1.25× magnification performs within 1% of the teacher model at 5× magnification while requiring 16 times fewer computations. Furthermore, our celiac disease results continue to improve as more unlabeled data is used: at 0.625× magnification, using unlabeled data improves accuracy by 4% over the tissue-level baseline. Therefore, our approach can improve the feasibility of deep learning solutions for digital pathology on standard computational hardware and infrastructure.
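To make the described approach concrete, below is a minimal sketch of cross-resolution, self-supervised knowledge distillation in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the ResNet-18 backbone, the temperature of 4.0, the Adam optimizer, and the 8× downsampling factor are all placeholders. The 8× factor mirrors the 10×-to-1.25× magnification drop in the abstract, whose roughly 8² = 64-fold pixel reduction accounts for the reported 64-fold computational saving. Because the frozen high-resolution teacher supplies soft targets, no labels are needed during distillation, which is what makes the procedure self-supervised.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Illustrative settings: the teacher sees high-resolution (e.g. 10x-equivalent)
    # crops; the student sees the same crops downsampled 8x (1.25x-equivalent),
    # cutting its per-image cost by roughly 8^2 = 64x.
    DOWNSAMPLE = 8
    TEMPERATURE = 4.0  # placeholder softening temperature
    NUM_CLASSES = 3    # placeholder number of histology classes

    # Assume teacher weights from supervised high-resolution pre-training are
    # loaded here; the teacher is frozen during distillation.
    teacher = models.resnet18(num_classes=NUM_CLASSES)
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)

    student = models.resnet18(num_classes=NUM_CLASSES)  # trained at low resolution
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

    def distill_step(high_res_batch: torch.Tensor) -> torch.Tensor:
        """One self-supervised distillation step: no labels, only teacher soft targets."""
        optimizer.zero_grad()

        # Teacher scores the full-resolution images; no gradients needed.
        with torch.no_grad():
            teacher_logits = teacher(high_res_batch)

        # Student sees the same images at a fraction of the resolution.
        low_res_batch = F.interpolate(
            high_res_batch, scale_factor=1.0 / DOWNSAMPLE,
            mode="bilinear", align_corners=False,
        )
        student_logits = student(low_res_batch)

        # Temperature-softened KL divergence between the two class distributions,
        # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
        loss = F.kl_div(
            F.log_softmax(student_logits / TEMPERATURE, dim=1),
            F.softmax(teacher_logits / TEMPERATURE, dim=1),
            reduction="batchmean",
        ) * TEMPERATURE ** 2

        loss.backward()
        optimizer.step()
        return loss

Presumably, the distilled student would then be fine-tuned or evaluated on the limited labeled tissue- or slide-level data; only the unlabeled distillation stage is sketched here.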