RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis
Authors
Kim Hyunil, Kwak Tae-Yeong, Chang Hyeyoon, Kim Sun Woo, Kim Injung
Affiliations
Deep Bio Inc., Seoul 08380, Republic of Korea.
School of Computer Science and Electrical Engineering, Handong Global University, Pohang 37554, Republic of Korea.
Publication
Bioengineering (Basel). 2023 Nov 2;10(11):1279. doi: 10.3390/bioengineering10111279.
We propose a novel transfer learning framework for pathological image analysis, Response-based Cross-task Knowledge Distillation (RCKD), which improves model performance by pretraining on a large unlabeled dataset under the guidance of a high-performance teacher model. RCKD first pretrains a student model to predict the nuclei segmentation results of the teacher model on unlabeled pathological images, and then fine-tunes the pretrained model for downstream tasks, such as organ cancer sub-type classification and cancer region segmentation, using relatively small target datasets. Unlike conventional knowledge distillation, RCKD does not require that the target tasks of the teacher and student models be the same. Moreover, unlike conventional transfer learning, RCKD can transfer knowledge between models with different architectures. In addition, we propose a lightweight architecture, the Convolutional neural network with Spatial Attention by Transformers (CSAT), for processing high-resolution pathological images with limited memory and computation. CSAT exhibited a top-1 accuracy of 78.6% on ImageNet with only 3M parameters and 1.08 G multiply-accumulate (MAC) operations. When pretrained by RCKD, CSAT exhibited average classification and segmentation accuracies of 94.2% and 0.673 mIoU on six pathological image datasets, which are 4% and 0.043 mIoU higher than EfficientNet-B0, and 7.4% and 0.006 mIoU higher than ConvNextV2-Atto pretrained on ImageNet, respectively.
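The response-based pretraining objective described above (a student trained to reproduce the teacher's nuclei-segmentation response on unlabeled patches) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name, shapes, and the use of hard per-pixel pseudo-labels with cross-entropy are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def rckd_pretrain_step(student_logits, teacher_logits):
    """One response-based distillation step (hypothetical sketch).

    The teacher's segmentation response on an unlabeled patch is turned
    into per-pixel pseudo-labels, and the student is scored with
    cross-entropy against them. Both inputs: (H, W, num_classes).
    Returns the pseudo-label map and the scalar loss.
    """
    pseudo_labels = teacher_logits.argmax(axis=-1)   # teacher's response
    probs = softmax(student_logits)                  # student's prediction
    h, w = pseudo_labels.shape
    # probability the student assigns to the teacher's label at each pixel
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], pseudo_labels]
    loss = -np.log(p_true + 1e-12).mean()            # per-pixel cross-entropy
    return pseudo_labels, loss
```

In the full framework this loss would be minimized over the unlabeled corpus before fine-tuning the student on the small labeled target dataset; note that the student and teacher architectures never need to match, since only the teacher's output response is consumed.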