
Expanding and Refining Hybrid Compressors for Efficient Object Re-Identification.

Author Information

Xie Yi, Wu Hanxiao, Zhu Jianqing, Zeng Huanqiang, Zhang Jing

Publication Information

IEEE Trans Image Process. 2024;33:3793-3808. doi: 10.1109/TIP.2024.3410684. Epub 2024 Jun 19.

DOI: 10.1109/TIP.2024.3410684
PMID: 38865219
Abstract

Recent object re-identification (Re-ID) methods gain high efficiency from lightweight student models trained by knowledge distillation (KD). However, the large architectural gap between lightweight students and heavy teachers makes it hard for students to receive and understand the teachers' knowledge, which costs accuracy. To address this, we propose a refiner-expander-refiner (RER) structure that enlarges a student's representational capacity and then prunes away the added complexity. The expander is a multi-branch convolutional layer that expands the student's representational capacity so it can comprehensively understand a teacher's knowledge, and it requires no feature-dimensional adapter, avoiding knowledge distortion. The two refiners are 1×1 convolutional layers that prune the input and output channels of the expander. In addition, to alleviate the competition between accuracy-related and pruning-related gradients, we design a common consensus gradient resetting (CCGR) method, which discards only those channels that every sample judges unimportant, i.e., the intersection of the per-sample unimportant-channel sets. Finally, the trained RER can be simplified into a slim convolutional layer via re-parameterization to speed up inference. Together, these components form the proposed expanding and refining hybrid compressing (ERHC) method. Extensive experiments show that ERHC delivers superior inference speed and accuracy; e.g., on the VeRi-776 dataset, with ResNet101 as the teacher, ERHC saves 75.33% of model parameters (MP) and 74.29% of floating-point operations (FLOPs) without sacrificing accuracy.
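The abstract packs several mechanisms into a few sentences, so here is a minimal PyTorch sketch of one plausible reading of the RER block and the CCGR consensus mask. Everything below is an assumption-driven illustration: the names `RER` and `ccgr_channel_mask`, the branch kernel sizes, the channel widths, and the per-sample importance scores are ours, not the paper's.

```python
# Hedged sketch of the refiner-expander-refiner (RER) structure and the CCGR
# consensus mask described in the abstract. Configuration details are assumed.
import torch
import torch.nn as nn


class RER(nn.Module):
    """Refiner -> multi-branch expander -> refiner. All layers are linear
    (no intermediate nonlinearity), so the trained block can later be
    re-parameterized into one slim convolution for inference."""

    def __init__(self, in_ch, mid_ch, out_ch, branch_kernels=(1, 3)):
        super().__init__()
        # Refiner 1: a 1x1 convolution pruning/projecting the input channels.
        self.refiner_in = nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False)
        # Expander: parallel convolution branches enlarging representational
        # capacity; every branch keeps mid_ch channels, so no
        # feature-dimensional adapter is needed between student and teacher.
        self.branches = nn.ModuleList(
            nn.Conv2d(mid_ch, mid_ch, kernel_size=k, padding=k // 2, bias=False)
            for k in branch_kernels
        )
        # Refiner 2: a 1x1 convolution pruning the expander's output channels.
        self.refiner_out = nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.refiner_in(x)
        x = sum(branch(x) for branch in self.branches)  # branch outputs summed
        return self.refiner_out(x)


def ccgr_channel_mask(importance: torch.Tensor, threshold: float) -> torch.Tensor:
    """Common consensus gradient resetting (CCGR), sketched.

    importance: (batch, channels) per-sample channel-importance scores
    (how these scores are computed is an assumption here). A channel is
    discarded only when *every* sample judges it unimportant, i.e. the
    intersection of the per-sample unimportant-channel sets.
    Returns a (channels,) bool mask of channels whose gradients are reset.
    """
    unimportant = importance < threshold  # per-sample judgment, (batch, channels)
    return unimportant.all(dim=0)         # consensus across the batch, (channels,)
```

Because the two 1×1 refiners and the expander branches are purely linear with nothing nonlinear between them, the trained block collapses into a single slim convolution by standard re-parameterization, matching the abstract's claim that the RER simplifies into one convolutional layer at inference time; e.g., an `RER(64, 128, 64)` as sketched would fold into one 3×3 convolution from 64 to 64 channels.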


Similar Articles

1. Expanding and Refining Hybrid Compressors for Efficient Object Re-Identification.
   IEEE Trans Image Process. 2024;33:3793-3808. doi: 10.1109/TIP.2024.3410684. Epub 2024 Jun 19.
2. Research on a lightweight electronic component detection method based on knowledge distillation.
   Math Biosci Eng. 2023 Nov 22;20(12):20971-20994. doi: 10.3934/mbe.2023928.
3. Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector.
   Neural Netw. 2023 Jul;164:345-356. doi: 10.1016/j.neunet.2023.04.015. Epub 2023 Apr 26.
4. Knowledge Transfer via Decomposing Essential Information in Convolutional Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2022 Jan;33(1):366-377. doi: 10.1109/TNNLS.2020.3027837. Epub 2022 Jan 5.
5. Gender differences in teachers' perceptions of students' temperament, educational competence, and teachability.
   Br J Educ Psychol. 2012 Jun;82(Pt 2):185-206. doi: 10.1111/j.2044-8279.2010.02017.x. Epub 2011 Jan 17.
6. Improving Knowledge Distillation With a Customized Teacher.
   IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2290-2299. doi: 10.1109/TNNLS.2022.3189680. Epub 2024 Feb 5.
7. Distilling Knowledge by Mimicking Features.
   IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):8183-8195. doi: 10.1109/TPAMI.2021.3103973. Epub 2022 Oct 4.
8. RamanCMP: A Raman spectral classification acceleration method based on lightweight model and model compression techniques.
   Anal Chim Acta. 2023 Oct 16;1278:341758. doi: 10.1016/j.aca.2023.341758. Epub 2023 Aug 28.
9. ResKD: Residual-Guided Knowledge Distillation.
   IEEE Trans Image Process. 2021;30:4735-4746. doi: 10.1109/TIP.2021.3066051. Epub 2021 May 5.
10. Where to Prune: Using LSTM to Guide Data-Dependent Soft Pruning.
    IEEE Trans Image Process. 2021;30:293-304. doi: 10.1109/TIP.2020.3035028. Epub 2020 Nov 24.