

Research on a Cross-Domain Few-Shot Adaptive Classification Algorithm Based on Knowledge Distillation Technology.

Authors

Gao Jiuyang, Li Siyu, Xia Wenfeng, Yu Jiuyang, Dai Yaonan

Affiliations

Hubei Provincial Engineering Technology Research Center of Green Chemical Equipment, School of Mechanical and Electrical Engineering, Wuhan Institute of Technology, Wuhan 430205, China.

School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430205, China.

Publication Information

Sensors (Basel). 2024 Mar 18;24(6):1939. doi: 10.3390/s24061939.

DOI: 10.3390/s24061939
PMID: 38544201
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10974425/
Abstract

With the development of deep learning and sensors and sensor collection methods, computer vision inspection technology has developed rapidly. The deep-learning-based classification algorithm requires the acquisition of a model with superior generalization capabilities through the utilization of a substantial quantity of training samples. However, due to issues such as privacy, annotation costs, and sensor-captured images, how to make full use of limited samples has become a major challenge for practical training and deployment. Furthermore, when simulating models and transferring them to actual image scenarios, discrepancies often arise between the common training sets and the target domain (domain offset). Currently, meta-learning offers a promising solution for few-shot learning problems. However, the quantity of supporting set data on the target domain remains limited, leading to limited cross-domain learning effectiveness. To address this challenge, we have developed a self-distillation and mixing (SDM) method utilizing a Teacher-Student framework. This method effectively transfers knowledge from the source domain to the target domain by applying self-distillation techniques and mixed data augmentation, learning better image representations from relatively abundant datasets, and achieving fine-tuning in the target domain. In comparison with nine classical models, the experimental results demonstrate that the SDM method excels in terms of training time and accuracy. Furthermore, SDM effectively transfers knowledge from the source domain to the target domain, even with a limited number of target domain samples.
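
The abstract describes SDM only at a high level: a Teacher-Student framework that combines self-distillation with mixed (mixup-style) data augmentation to fine-tune on a small target-domain support set. As a rough, non-authoritative illustration of that idea, the Python/PyTorch sketch below pairs mixup augmentation with a KL-divergence distillation term toward a teacher's softened predictions. All names (sdm_step, mixup) and hyperparameters (alpha, T, beta) are assumptions for illustration and are not taken from the paper.

# Minimal sketch of one target-domain fine-tuning step in the spirit of the SDM
# description above; hypothetical names, not the authors' implementation.
import torch
import torch.nn.functional as F


def mixup(x, y, num_classes, alpha=0.4):
    """Mix a batch of images and their one-hot labels (standard mixup augmentation)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix


def sdm_step(student, teacher, x, y, num_classes, optimizer, T=4.0, beta=0.5):
    """One update on target-domain support data: cross-entropy on the mixed labels
    plus KL distillation toward the teacher's softened predictions (no gradient
    flows to the teacher)."""
    teacher.eval()
    x_mix, y_mix = mixup(x, y, num_classes)
    student_logits = student(x_mix)
    with torch.no_grad():
        teacher_logits = teacher(x_mix)
    # Soft-label cross-entropy on the mixed targets.
    ce = torch.mean(torch.sum(-y_mix * F.log_softmax(student_logits, dim=1), dim=1))
    # Temperature-scaled distillation loss (scaled by T^2, as is conventional).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    loss = (1.0 - beta) * ce + beta * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In an actual pipeline, the student would typically be initialized from a teacher trained on the relatively abundant source-domain data, and sdm_step would then be applied repeatedly over the small target-domain support set.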


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/5add1c5507d5/sensors-24-01939-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/986b2ab7f482/sensors-24-01939-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/622ba29936d2/sensors-24-01939-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/d839798733a6/sensors-24-01939-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/a87b67c6e740/sensors-24-01939-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/f2ae010bbf64/sensors-24-01939-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c338/10974425/6c13ad35b534/sensors-24-01939-g007.jpg

Similar Articles

1. Research on a Cross-Domain Few-Shot Adaptive Classification Algorithm Based on Knowledge Distillation Technology.
   Sensors (Basel). 2024 Mar 18;24(6):1939. doi: 10.3390/s24061939.
2. Dual Distillation Discriminator Networks for Domain Adaptive Few-Shot Learning.
   Neural Netw. 2023 Aug;165:625-633. doi: 10.1016/j.neunet.2023.06.009. Epub 2023 Jun 15.
3. Few-Shot Face Stylization via GAN Prior Distillation.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4492-4503. doi: 10.1109/TNNLS.2024.3377609. Epub 2025 Feb 28.
4. MCW: A Generalizable Deepfake Detection Method for Few-Shot Learning.
   Sensors (Basel). 2023 Oct 27;23(21):8763. doi: 10.3390/s23218763.
5. Hierarchical Knowledge Propagation and Distillation for Few-Shot Learning.
   Neural Netw. 2023 Oct;167:615-625. doi: 10.1016/j.neunet.2023.08.040. Epub 2023 Sep 9.
6. A Few-shot learning approach for Monkeypox recognition from a cross-domain perspective.
   J Biomed Inform. 2023 Aug;144:104449. doi: 10.1016/j.jbi.2023.104449. Epub 2023 Jul 22.
7. Few-shot disease recognition algorithm based on supervised contrastive learning.
   Front Plant Sci. 2024 Feb 7;15:1341831. doi: 10.3389/fpls.2024.1341831. eCollection 2024.
8. A difficulty-aware and task-augmentation method based on meta-learning model for few-shot diabetic retinopathy classification.
   Quant Imaging Med Surg. 2024 Jan 3;14(1):861-876. doi: 10.21037/qims-23-567. Epub 2024 Jan 2.
9. Cervical Cell Image Classification-Based Knowledge Distillation.
   Biomimetics (Basel). 2022 Nov 10;7(4):195. doi: 10.3390/biomimetics7040195.
10. Learning with few samples in deep learning for image classification, a mini-review.
    Front Comput Neurosci. 2023 Jan 5;16:1075294. doi: 10.3389/fncom.2022.1075294. eCollection 2022.

Cited By

1. FIAEPI-KD: A novel knowledge distillation approach for precise detection of missing insulators in transmission lines.
   PLoS One. 2025 May 30;20(5):e0324524. doi: 10.1371/journal.pone.0324524. eCollection 2025.

References

1. Retention Time Prediction through Learning from a Small Training Data Set with a Pretrained Graph Neural Network.
   Anal Chem. 2023 Nov 28;95(47):17273-17283. doi: 10.1021/acs.analchem.3c03177. Epub 2023 Nov 13.
2. A comparison of bias-adjusted generalized estimating equations for sparse binary data in small-sample longitudinal studies.
   Stat Med. 2023 Jul 10;42(15):2711-2727. doi: 10.1002/sim.9744. Epub 2023 Apr 16.
3. Center transfer for supervised domain adaptation.
   Appl Intell (Dordr). 2023 Jan 26:1-17. doi: 10.1007/s10489-022-04414-2.
4. Supervised Domain Adaptation: A Graph Embedding Perspective and a Rectified Experimental Protocol.
   IEEE Trans Image Process. 2021;30:8619-8631. doi: 10.1109/TIP.2021.3118978. Epub 2021 Oct 20.