Stepwise self-knowledge distillation for skin lesion image classification.

Author Information

Zheng Jian, Xie Kewei, Zhang Dingwen, Lv Zhiming, Yu Xiangchun

Affiliations

Yichun Lithium New Energy Industry Research Institute, Jiangxi University of Science and Technology, No. 5 Chunhua Road, Yichun, 336000, Jiangxi, China.

Jiangxi Provincial Key Laboratory of Multidimensional Intelligent Perception and Control, School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, 341000, China.

Publication Information

Sci Rep. 2025 Jul 12;15(1):25238. doi: 10.1038/s41598-025-10717-4.

Abstract

Self-knowledge distillation, in which the teacher and student models share the same network structure, has gained considerable attention in medical image classification because it enables knowledge distillation without pre-training a separate teacher model. However, current self-knowledge distillation methods struggle to determine appropriate learning objectives for the next stage, which limits the improvement potential of the student model. In this paper, we present a Stepwise Self-Knowledge Distillation framework, SW-SKD, designed to enhance dermatological image classification. The framework employs a stepwise distillation strategy that efficiently explores learning objectives through a feature rectification block (FRB) and a logit rectification block (LRB). In the FRB, we extract the attention of the last stage of the backbone network and take the attention-rectified features as the learning objective; FRB-based stepwise distillation then performs attention-based intermediate feature distillation from back to front. The LRB implements logit-based knowledge distillation by adjusting the maximum value of the predicted logits to match the correct class index; the rectified logits serve as the learning objective for the next stage, again progressing from back to front. To demonstrate the framework's effectiveness, extensive experiments were conducted on the HAM10000, ISIC2019, and Dermnet datasets. On HAM10000, with ResNet50 and ResNet101 as baseline networks, weighted-average Precision improves over the second-best method by 0.8% and 1.4%, and weighted-average Recall by 2.1% and 0.9%. On ISIC2019 with the same baselines, average Precision improves by 0.5% and 0.9%, and average Recall by 1.1% and 0.7%. SW-SKD also outperforms other mainstream methods, showing that it can significantly enhance the student model's performance in dermatological image classification.
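The abstract describes the two rectification blocks only at a high level. The PyTorch sketch below illustrates one plausible reading of the mechanism; the function names (`attention_map`, `rectify_logits`, `swskd_loss`), the squared-activation spatial attention, the adjacent-stage pairing, and the MSE/KL loss choices are all assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention of a feature map (assumed form): mean of squared
    activations over channels, (B, C, H, W) -> (B, 1, H, W)."""
    return feat.pow(2).mean(dim=1, keepdim=True)

def rectify_logits(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """LRB sketch: swap the maximum logit with the logit at the ground-truth
    index, so the rectified output always ranks the correct class first (one
    reading of "adjusting the maximum value ... to match the correct index").
    Detached, since it acts as a fixed learning objective."""
    rect = logits.detach().clone()
    max_val, max_idx = rect.max(dim=1)
    true_val = rect.gather(1, labels.unsqueeze(1)).squeeze(1)
    rect.scatter_(1, labels.unsqueeze(1), max_val.unsqueeze(1))
    rect.scatter_(1, max_idx.unsqueeze(1), true_val.unsqueeze(1))
    return rect

def swskd_loss(stage_feats, logits, labels, T=4.0, alpha=1.0, beta=1.0):
    """Combined objective sketch: cross-entropy + back-to-front attention
    distillation between adjacent stages (FRB) + temperature-softened KL
    distillation toward the label-rectified logits (LRB)."""
    ce = F.cross_entropy(logits, labels)

    # FRB: each shallower stage imitates the (detached) attention of the
    # stage behind it, so objectives propagate from back to front.
    frb = logits.new_zeros(())
    for shallow, deep in zip(stage_feats[:-1], stage_feats[1:]):
        target = attention_map(deep).detach()
        pred = F.interpolate(attention_map(shallow), size=target.shape[-2:],
                             mode="bilinear", align_corners=False)
        frb = frb + F.mse_loss(F.normalize(pred.flatten(1), dim=1),
                               F.normalize(target.flatten(1), dim=1))

    # LRB: soften both distributions with temperature T, scale by T^2 as in
    # standard logit distillation.
    rect = rectify_logits(logits, labels)
    lrb = F.kl_div(F.log_softmax(logits / T, dim=1),
                   F.softmax(rect / T, dim=1),
                   reduction="batchmean") * (T * T)

    return ce + alpha * frb + beta * lrb
```

With a ResNet backbone, `stage_feats` would be the outputs of the four residual stages and `logits` the classifier output, e.g. `loss = swskd_loss([f1, f2, f3, f4], logits, labels)`. The weights `alpha`, `beta` and the temperature `T` are placeholders; the paper presumably uses its own tuned values.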

