Chen Lei, Cao Tieyong, Zheng Yunfei, Yang Jibin, Wang Yang, Wang Yekui, Zhang Bo
The Army Engineering University of PLA, Nanjing, China.
The PLA Army Academy of Artillery and Air Defense, Hefei, China.
PeerJ Comput Sci. 2023 Jun 19;9:e1435. doi: 10.7717/peerj-cs.1435. eCollection 2023.
Self-distillation methods use a Kullback-Leibler divergence (KL) loss to transfer knowledge from the network itself, which can improve model performance without increasing computational resources or complexity. However, when applied to salient object detection (SOD), it is difficult to transfer knowledge effectively with KL. To improve SOD model performance without increasing computational resources, a non-negative feedback self-distillation method is proposed. First, a virtual teacher self-distillation method is proposed to enhance model generalization; it achieves good results on pixel-wise classification tasks but yields only a small improvement in SOD. Second, to understand the behavior of the self-distillation loss, the gradient directions of the KL and cross-entropy (CE) losses are analyzed. It is found that, in SOD, KL can produce inconsistent gradients whose direction is opposite to that of CE. Finally, a non-negative feedback loss is proposed for SOD, which calculates the distillation losses of the foreground and background in different ways to ensure that the teacher network transfers only positive knowledge to the student. Experiments on five datasets show that the proposed self-distillation methods effectively improve the performance of SOD models, with an average improvement of about 2.7% over the baseline networks.
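The abstract does not give the exact formulation, but as a rough illustration of the non-negative feedback idea, the PyTorch sketch below computes the distillation loss separately over foreground and background pixels and keeps only the terms in which the teacher's prediction agrees with the ground-truth direction, so the teacher cannot push the student away from the label. The function name, tensor shapes, and the specific masking rule are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def nnf_distill_loss(student_logits, teacher_logits, gt_mask, eps=1e-6):
    # student_logits, teacher_logits: raw saliency logits, shape (B, 1, H, W)
    # gt_mask: binary ground-truth saliency map, shape (B, 1, H, W)
    s = torch.sigmoid(student_logits)
    t = torch.sigmoid(teacher_logits).detach()   # no gradient flows through the teacher

    fg = gt_mask          # foreground (salient) pixels, label 1
    bg = 1.0 - gt_mask    # background pixels, label 0

    # Keep only "positive" feedback: on the foreground, distill where the
    # teacher is more confident than the student (pulls s toward 1); on the
    # background, distill where the teacher is less confident (pulls s toward 0).
    fg_keep = fg * (t > s).float()
    bg_keep = bg * (t < s).float()

    # Pixel-wise cross entropy against the (soft) teacher probabilities,
    # accumulated separately over the two regions and then combined.
    per_pixel = F.binary_cross_entropy(s, t, reduction='none')
    loss_fg = (per_pixel * fg_keep).sum() / fg_keep.sum().clamp_min(eps)
    loss_bg = (per_pixel * bg_keep).sum() / bg_keep.sum().clamp_min(eps)
    return loss_fg + loss_bg

In this sketch the masking plays the role of the non-negative constraint: a pixel contributes to the loss only when moving the student toward the teacher also moves it toward the ground truth, which avoids the opposite-direction gradients that the abstract attributes to a plain KL self-distillation loss.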