Filter Pruning via Learned Representation Median in the Frequency Domain.

Publication Information

IEEE Trans Cybern. 2023 May;53(5):3165-3175. doi: 10.1109/TCYB.2021.3124284. Epub 2023 Apr 21.

Abstract

In this article, we propose a novel filter pruning method for deep learning networks by calculating the learned representation median (RM) in frequency domain (LRMF). In contrast to the existing filter pruning methods that remove relatively unimportant filters in the spatial domain, our newly proposed approach emphasizes the removal of absolutely unimportant filters in the frequency domain. Through extensive experiments, we observed that the criterion for "relative unimportance" cannot be generalized well and that the discrete cosine transform (DCT) domain can eliminate redundancy and emphasize low-frequency representation, which is consistent with the human visual system. Based on these important observations, our LRMF calculates the learned RM in the frequency domain and removes its corresponding filter, since it is absolutely unimportant at each layer. Thanks to this, the time-consuming fine-tuning process is not required in LRMF. The results show that LRMF outperforms state-of-the-art pruning methods. For example, with ResNet110 on CIFAR-10, it achieves a 52.3% FLOPs reduction with an improvement of 0.04% in Top-1 accuracy. With VGG16 on CIFAR-100, it reduces FLOPs by 35.9% while increasing accuracy by 0.5%. On ImageNet, ResNet18 and ResNet50 are accelerated by 53.3% and 52.7% with only 1.76% and 0.8% accuracy loss, respectively. The code is based on PyTorch and is available at https://github.com/zhangxin-xd/LRMF.
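The abstract outlines the selection criterion: transform each filter's learned representation into the frequency domain with the DCT, compute the representation median (RM) across filters, and remove the filter(s) closest to that median as "absolutely unimportant." The paper's exact RM computation is not given here, so the following is only a minimal sketch of that idea; the function name `lrmf_prune_indices`, the element-wise median as a stand-in for the learned RM, and the `prune_ratio` parameter are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn


def lrmf_prune_indices(weights, prune_ratio=0.3):
    """Sketch of median-based filter selection in the DCT domain.

    weights: array of shape (out_channels, in_channels, k, k),
             e.g. a conv layer's weight tensor as a NumPy array.
    Returns indices of filters to prune: those whose frequency-domain
    representation lies closest to the median representation.
    """
    # 2-D DCT over each kernel's spatial axes moves the learned
    # representation into the frequency domain.
    freq = dctn(weights, norm='ortho', axes=(-2, -1))
    flat = freq.reshape(freq.shape[0], -1)

    # Element-wise median across filters approximates the
    # representation median (the paper's RM may be computed differently).
    median = np.median(flat, axis=0)

    # Filters nearest the median carry the least distinctive
    # information and are treated as redundant.
    dists = np.linalg.norm(flat - median, axis=1)
    n_prune = int(len(weights) * prune_ratio)
    return np.argsort(dists)[:n_prune]
```

In a PyTorch workflow this would be applied per layer (e.g. on `conv.weight.detach().numpy()`), after which the selected output channels are removed from the layer and from the next layer's input channels.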

