Filter Pruning via Learned Representation Median in the Frequency Domain.

Publication Information

IEEE Trans Cybern. 2023 May;53(5):3165-3175. doi: 10.1109/TCYB.2021.3124284. Epub 2023 Apr 21.

DOI: 10.1109/TCYB.2021.3124284
PMID: 34797771
Abstract

In this article, we propose a novel filter pruning method for deep learning networks by calculating the learned representation median (RM) in frequency domain (LRMF). In contrast to the existing filter pruning methods that remove relatively unimportant filters in the spatial domain, our newly proposed approach emphasizes the removal of absolutely unimportant filters in the frequency domain. Through extensive experiments, we observed that the criterion for "relative unimportance" cannot be generalized well and that the discrete cosine transform (DCT) domain can eliminate redundancy and emphasize low-frequency representation, which is consistent with the human visual system. Based on these important observations, our LRMF calculates the learned RM in the frequency domain and removes its corresponding filter, since it is absolutely unimportant at each layer. Thanks to this, the time-consuming fine-tuning process is not required in LRMF. The results show that LRMF outperforms state-of-the-art pruning methods. For example, with ResNet110 on CIFAR-10, it achieves a 52.3% FLOPs reduction with an improvement of 0.04% in Top-1 accuracy. With VGG16 on CIFAR-100, it reduces FLOPs by 35.9% while increasing accuracy by 0.5%. On ImageNet, ResNet18 and ResNet50 are accelerated by 53.3% and 52.7% with only 1.76% and 0.8% accuracy loss, respectively. The code is based on PyTorch and is available at https://github.com/zhangxin-xd/LRMF.
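The abstract describes the method only at a high level, so below is a minimal, hypothetical PyTorch sketch of how a frequency-domain median criterion of this kind could be evaluated for a single convolutional layer. It is not the authors' implementation (their released code is at https://github.com/zhangxin-xd/LRMF); the helper names `dct_2d` and `select_filters_to_prune`, the use of an element-wise median, and the nearest-to-median selection rule are assumptions made for illustration only.

```python
# Illustrative sketch of a frequency-domain median pruning criterion.
# NOT the authors' code; square kernels and an element-wise median are assumed.
import math
import torch

def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II basis matrix of size n x n."""
    x = torch.arange(n, dtype=torch.float32)
    u = x[:, None]                                   # frequency index (rows)
    basis = torch.cos(math.pi * (2 * x[None, :] + 1) * u / (2 * n))
    basis[0] /= math.sqrt(2.0)
    return basis * math.sqrt(2.0 / n)

def dct_2d(weight: torch.Tensor) -> torch.Tensor:
    """2-D DCT over the last two (spatial) dimensions; assumes square kernels."""
    d = dct_matrix(weight.shape[-1]).to(weight.dtype)
    return d @ weight @ d.t()                        # broadcasts over leading dims

def select_filters_to_prune(weight: torch.Tensor, num_prune: int) -> torch.Tensor:
    """
    weight: conv weight of shape [out_channels, in_channels, k, k].
    Returns indices of `num_prune` output filters whose frequency-domain
    representations lie closest to the layer-wise median representation,
    treating those as the most redundant (hypothetical selection rule).
    """
    freq = dct_2d(weight)                  # per-filter DCT coefficients
    rep = freq.flatten(1)                  # [out_channels, in_channels * k * k]
    median_rep = rep.median(dim=0).values  # element-wise median representation
    dist = (rep - median_rep).norm(dim=1)  # distance of each filter to the median
    return dist.argsort()[:num_prune]      # nearest-to-median filters get pruned

# Usage example: select 4 of 16 filters in a random 3x3 conv layer.
w = torch.randn(16, 8, 3, 3)
print(select_filters_to_prune(w, num_prune=4))
```

In the paper itself the representations are learned during training and filters are removed layer by layer without a fine-tuning stage; the sketch above only illustrates the median-in-the-frequency-domain selection idea.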


Similar Articles

1. Filter Pruning via Learned Representation Median in the Frequency Domain.
IEEE Trans Cybern. 2023 May;53(5):3165-3175. doi: 10.1109/TCYB.2021.3124284. Epub 2023 Apr 21.
2. Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters.
IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):9139-9148. doi: 10.1109/TNNLS.2022.3156047. Epub 2023 Oct 27.
3. REAF: Remembering Enhancement and Entropy-Based Asymptotic Forgetting for Filter Pruning.
IEEE Trans Image Process. 2023;32:3912-3923. doi: 10.1109/TIP.2023.3288986. Epub 2023 Jul 17.
4. FPWT: Filter pruning via wavelet transform for CNNs.
Neural Netw. 2024 Nov;179:106577. doi: 10.1016/j.neunet.2024.106577. Epub 2024 Jul 26.
5. Filter Sketch for Network Pruning.
IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7091-7100. doi: 10.1109/TNNLS.2021.3084206. Epub 2022 Nov 30.
6. HRel: Filter pruning based on High Relevance between activation maps and class labels.
Neural Netw. 2022 Mar;147:186-197. doi: 10.1016/j.neunet.2021.12.017. Epub 2021 Dec 30.
7. Model pruning based on filter similarity for edge device deployment.
Front Neurorobot. 2023 Mar 2;17:1132679. doi: 10.3389/fnbot.2023.1132679. eCollection 2023.
8. Random pruning: channel sparsity by expectation scaling factor.
PeerJ Comput Sci. 2023 Sep 5;9:e1564. doi: 10.7717/peerj-cs.1564. eCollection 2023.
9. Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):3930-3942. doi: 10.1109/TNNLS.2021.3106917. Epub 2025 Feb 28.
10. Exploiting Sparse Self-Representation and Particle Swarm Optimization for CNN Compression.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10266-10278. doi: 10.1109/TNNLS.2022.3165530. Epub 2023 Nov 30.

Cited By

1. Spectral-Spatial Feature Fusion for Hyperspectral Anomaly Detection.
Sensors (Basel). 2024 Mar 3;24(5):1652. doi: 10.3390/s24051652.
2. Deep learning-based important weights-only transfer learning approach for COVID-19 CT-scan classification.
Appl Intell (Dordr). 2023;53(6):7201-7215. doi: 10.1007/s10489-022-03893-7. Epub 2022 Jul 18.