Gao Jiuyang, Li Siyu, Xia Wenfeng, Yu Jiuyang, Dai Yaonan
Hubei Provincial Engineering Technology Research Center of Green Chemical Equipment, School of Mechanical and Electrical Engineering, Wuhan Institute of Technology, Wuhan 430205, China.
School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430205, China.
Sensors (Basel). 2024 Mar 18;24(6):1939. doi: 10.3390/s24061939.
With the development of deep learning, sensors, and sensor acquisition methods, computer vision inspection technology has advanced rapidly. Deep-learning-based classification algorithms require a large number of training samples to obtain a model with strong generalization capability. However, due to issues such as privacy, annotation cost, and the limitations of sensor-captured images, making full use of limited samples has become a major challenge for practical training and deployment. Furthermore, when models trained on simulated data are transferred to real image scenarios, discrepancies often arise between the common training sets and the target domain (domain shift). Meta-learning currently offers a promising solution to few-shot learning problems; however, the amount of support-set data in the target domain remains limited, which constrains cross-domain learning effectiveness. To address this challenge, we have developed a self-distillation and mixing (SDM) method built on a teacher-student framework. The method transfers knowledge from the source domain to the target domain by applying self-distillation and mixed data augmentation, learning better image representations from relatively abundant datasets, and fine-tuning in the target domain. Experimental comparisons with nine classical models demonstrate that the SDM method excels in both training time and accuracy. Furthermore, SDM effectively transfers knowledge from the source domain to the target domain even with a limited number of target domain samples.
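The abstract outlines the core training ingredients of SDM: a frozen teacher that supervises a student copy of itself (self-distillation) and mixed data augmentation applied to the inputs. The sketch below is a minimal illustration of one such training step in PyTorch, assuming a mixup-style augmentation and a temperature-scaled KL distillation loss; the function names, hyperparameters (alpha, temperature, distill_weight), and toy backbone are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of self-distillation with mixed data augmentation in a
# teacher-student setup (illustrative only, not the authors' code).
import copy
import torch
import torch.nn.functional as F
from torch import nn


def mixup(x, y, num_classes, alpha=0.4):
    """Mix a batch of images and their one-hot labels (mixed data augmentation)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix


def self_distill_step(student, teacher, x, y, num_classes,
                      optimizer, temperature=4.0, distill_weight=0.5):
    """One training step: classification loss on mixed samples plus a
    temperature-scaled KL self-distillation loss against the frozen teacher."""
    student.train()
    x_mix, y_mix = mixup(x, y, num_classes)

    student_logits = student(x_mix)
    with torch.no_grad():
        teacher_logits = teacher(x_mix)

    # Cross-entropy against the mixed (soft) labels.
    ce = -(y_mix * F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()
    # Soft-label distillation from the teacher's predictions.
    kl = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  F.softmax(teacher_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2

    loss = (1 - distill_weight) * ce + distill_weight * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    num_classes = 5  # e.g. a 5-way few-shot episode on the target domain
    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))
    teacher = copy.deepcopy(student).eval()   # frozen snapshot acts as teacher
    opt = torch.optim.SGD(student.parameters(), lr=0.01)

    x = torch.randn(8, 3, 32, 32)             # toy source-domain batch
    y = torch.randint(0, num_classes, (8,))
    print(self_distill_step(student, teacher, x, y, num_classes, opt))
```

In this hedged reading, the same step would be reused during target-domain fine-tuning, with the teacher periodically refreshed from the student so knowledge learned on the relatively abundant source data continues to regularize the few target-domain samples.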