Suppr 超能文献



Efficient image classification through collaborative knowledge distillation: A novel AlexNet modification approach.

Author Information

Kuldashboy Avazov, Umirzakova Sabina, Allaberdiev Sharofiddin, Nasimov Rashid, Abdusalomov Akmalbek, Cho Young Im

Affiliations

Department of Computer Engineering, Gachon University Sujeong-Gu, Seongnam-Si, Gyeonggi-Do, 461-701, Republic of Korea.

College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China.

Publication Information

Heliyon. 2024 Jul 14;10(14):e34376. doi: 10.1016/j.heliyon.2024.e34376. eCollection 2024 Jul 30.

DOI: 10.1016/j.heliyon.2024.e34376
PMID: 39113984
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11305255/
Abstract

This paper introduces an innovative image classification technique utilizing knowledge distillation, tailored for a lightweight model structure. The core of the approach is a modified version of the AlexNet architecture, enhanced with depthwise-separable convolution layers. A unique aspect of this work is the Teacher-Student Collaborative Knowledge Distillation (TSKD) method. Unlike conventional knowledge distillation techniques, TSKD employs a dual-layered learning strategy, where the student model learns from both the final output and the intermediate layers of the teacher model. This collaborative learning approach enables the student model to actively engage in the learning process, resulting in more efficient knowledge transfer. The paper emphasizes the model suitability for scenarios with limited computational resources. This is achieved through architectural optimizations and the introduction of specialized loss functions, which balance the trade-off between model complexity and computational efficiency. The study demonstrates that despite its lightweight nature, the model maintains high accuracy and robustness in image classification tasks. Key contributions of the paper include the innovative use of depthwise-separable convolution in AlexNet, the TSKD approach for enhanced knowledge transfer, and the development of unique loss functions. These advancements collectively contribute to the model effectiveness in environments with computational constraints, making it a valuable contribution to the field of image classification.
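The abstract names two key ingredients: depthwise-separable convolutions to lighten the student network, and a "dual-layered" distillation signal in which the student learns from both the teacher's final output and its intermediate layers. The sketch below is a minimal NumPy illustration reconstructed from the abstract alone, not the authors' implementation; the function names and the weights `T`, `alpha`, and `beta` are assumptions chosen for the example.

```python
import numpy as np

def sep_conv_params(c_in, c_out, k):
    """Parameter counts: standard conv vs. depthwise-separable conv."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out  # depthwise k x k + pointwise 1 x 1
    return standard, separable

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tskd_loss(student_logits, teacher_logits, labels,
              student_feats, teacher_feats,
              T=4.0, alpha=0.5, beta=0.1):
    """Illustrative teacher-student distillation objective:
    cross-entropy on hard labels, KL divergence on temperature-softened
    teacher logits, and MSE on intermediate-layer features (the two
    learning signals the abstract describes)."""
    n = len(labels)
    # 1) hard-label cross-entropy on the student's predictions
    p_s = softmax(student_logits)
    ce = -np.mean(np.log(p_s[np.arange(n), labels] + 1e-12))
    # 2) soft-label KL divergence at temperature T, scaled by T^2
    p_t = softmax(teacher_logits, T)
    p_s_T = softmax(student_logits, T)
    kd = (T ** 2) * np.mean(
        np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s_T + 1e-12)), axis=-1))
    # 3) intermediate-feature matching between student and teacher
    feat = np.mean((student_feats - teacher_feats) ** 2)
    return (1 - alpha) * ce + alpha * kd + beta * feat
```

As a sense of the efficiency gain the abstract claims: `sep_conv_params(96, 256, 5)` gives 614,400 parameters for a standard 5x5 convolution but only 26,976 for its depthwise-separable counterpart, roughly a k²-fold reduction.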


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/210c/11305255/f155bbb46cc2/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/210c/11305255/fe156aaab671/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/210c/11305255/54bc00f789c2/gr3.jpg

Similar Articles

1. Efficient image classification through collaborative knowledge distillation: A novel AlexNet modification approach.
   Heliyon. 2024 Jul 14;10(14):e34376. doi: 10.1016/j.heliyon.2024.e34376. eCollection 2024 Jul 30.
2. Joint learning method with teacher-student knowledge distillation for on-device breast cancer image classification.
   Comput Biol Med. 2023 Mar;155:106476. doi: 10.1016/j.compbiomed.2022.106476. Epub 2022 Dec 24.
3. Relation Knowledge Distillation by Auxiliary Learning for Object Detection.
   IEEE Trans Image Process. 2024;33:4796-4810. doi: 10.1109/TIP.2024.3445740. Epub 2024 Aug 30.
4. DSP-KD: Dual-Stage Progressive Knowledge Distillation for Skin Disease Classification.
   Bioengineering (Basel). 2024 Jan 10;11(1):70. doi: 10.3390/bioengineering11010070.
5. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.
   Comput Biol Med. 2024 Mar;170:108088. doi: 10.1016/j.compbiomed.2024.108088. Epub 2024 Feb 3.
6. LHAR: Lightweight Human Activity Recognition on Knowledge Distillation.
   IEEE J Biomed Health Inform. 2024 Nov;28(11):6318-6328. doi: 10.1109/JBHI.2023.3298932. Epub 2024 Nov 6.
7. Resolution-based distillation for efficient histology image classification.
   Artif Intell Med. 2021 Sep;119:102136. doi: 10.1016/j.artmed.2021.102136. Epub 2021 Aug 6.
8. Learning lightweight tea detector with reconstructed feature and dual distillation.
   Sci Rep. 2024 Oct 10;14(1):23669. doi: 10.1038/s41598-024-73674-4.
9. Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
   Comput Biol Med. 2024 Jan;168:107764. doi: 10.1016/j.compbiomed.2023.107764. Epub 2023 Nov 30.
10. A lightweight speech recognition method with target-swap knowledge distillation for Mandarin air traffic control communications.
   PeerJ Comput Sci. 2023 Nov 1;9:e1650. doi: 10.7717/peerj-cs.1650. eCollection 2023.

Cited By

1. The AlexNet HSD model for industrial heritage damage detection and adaptive reuse under artificial intelligence.
   Sci Rep. 2025 Jul 19;15(1):26289. doi: 10.1038/s41598-025-12257-3.
2. Enhanced AlexNet with Gabor and Local Binary Pattern Features for Improved Facial Emotion Recognition.
   Sensors (Basel). 2025 Jun 19;25(12):3832. doi: 10.3390/s25123832.
3. Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model.
   Sci Rep. 2025 Jun 23;15(1):20253. doi: 10.1038/s41598-025-06344-8.

References

1. Enhancing Medical Image Denoising with Innovative Teacher-Student Model-Based Approaches for Precision Diagnostics.
   Sensors (Basel). 2023 Nov 29;23(23):9502. doi: 10.3390/s23239502.
2. Joint learning method with teacher-student knowledge distillation for on-device breast cancer image classification.
   Comput Biol Med. 2023 Mar;155:106476. doi: 10.1016/j.compbiomed.2022.106476. Epub 2022 Dec 24.
3. SSD-KD: A self-supervised diverse knowledge distillation method for lightweight skin lesion classification using dermoscopic images.
   Med Image Anal. 2023 Feb;84:102693. doi: 10.1016/j.media.2022.102693. Epub 2022 Nov 13.
4. Degradation Type-Aware Image Restoration for Effective Object Detection in Adverse Weather.
   Sensors (Basel). 2024 Sep 30;24(19):6330. doi: 10.3390/s24196330.