

REAF: Remembering Enhancement and Entropy-Based Asymptotic Forgetting for Filter Pruning.

Publication Information

IEEE Trans Image Process. 2023;32:3912-3923. doi: 10.1109/TIP.2023.3288986. Epub 2023 Jul 17.

DOI: 10.1109/TIP.2023.3288986
PMID: 37436852
Abstract

Neurologically, filter pruning is a procedure of forgetting and of recovering what is remembered. Prevailing methods first forget less important information directly from an unrobust baseline and expect the performance sacrifice to be minimal. However, unsaturated base remembering imposes a ceiling on the slimmed model and leads to suboptimal performance, while forgetting significantly at the outset causes unrecoverable information loss. Here, we design a novel filter pruning paradigm termed Remembering Enhancement and Entropy-based Asymptotic Forgetting (REAF). Inspired by robustness theory, we first enhance remembering by over-parameterizing the baseline with fusible compensatory convolutions, which liberates the pruned model from the baseline at no inference cost. The coupling between the original and compensatory filters then necessitates a bilateral-collaborated pruning criterion: a filter pair is preserved only when the filter has the largest intra-branch distance and its compensatory counterpart has the strongest remembering-enhancement power. Further, Ebbinghaus curve-based asymptotic forgetting is proposed to protect the pruned model from unstable learning: the number of pruned filters increases asymptotically during training, so the memory of the pretrained weights is gradually concentrated in the remaining filters. Extensive experiments demonstrate the superiority of REAF over many state-of-the-art (SOTA) methods; for example, REAF removes 47.55% of the FLOPs and 42.98% of the parameters of ResNet-50 with only a 0.98% top-1 accuracy loss on ImageNet. The code is available at https://github.com/zhangxin-xd/REAF.
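Two of the mechanisms named in the abstract are concrete enough to sketch. Below is a minimal PyTorch illustration, not the authors' implementation (that lives in the linked repository): it assumes the compensatory branch is a parallel 1x1 convolution that can be folded RepVGG-style into the main 3x3 convolution, and it models the asymptotically increasing pruned-filter count as an exponential-saturation schedule in the spirit of the Ebbinghaus curve. The branch layout, function names, and the rate constant are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def fuse_compensatory_branch(w3, b3, w1, b1):
    """Fold a parallel 1x1 compensatory conv into the main 3x3 conv.

    Zero-padding the 1x1 kernel to 3x3 and summing the weights yields a
    single convolution with identical output, so the extra branch adds
    no inference cost. Shapes: w3 is (out, in, 3, 3), w1 is
    (out, in, 1, 1); b3 and b1 are (out,).
    """
    w_fused = w3 + F.pad(w1, [1, 1, 1, 1])  # pad 1x1 -> 3x3, then add
    b_fused = b3 + b1
    return w_fused, b_fused

def pruned_filter_count(epoch, total_epochs, target_pruned, rate=5.0):
    """Ebbinghaus-style asymptotic schedule (hypothetical rate constant).

    Mirrors the forgetting curve R = exp(-t/S): the cumulative number of
    pruned filters grows as target * (1 - exp(-rate * t)), increasing
    quickly early in training and saturating near target_pruned.
    """
    t = epoch / total_epochs
    return round(target_pruned * (1.0 - math.exp(-rate * t)))

# Quick check that the fusion is exact for a random input.
out_c, in_c = 8, 4
w3, b3 = torch.randn(out_c, in_c, 3, 3), torch.randn(out_c)
w1, b1 = torch.randn(out_c, in_c, 1, 1), torch.randn(out_c)
x = torch.randn(1, in_c, 16, 16)
two_branch = F.conv2d(x, w3, b3, padding=1) + F.conv2d(x, w1, b1)
wf, bf = fuse_compensatory_branch(w3, b3, w1, b1)
assert torch.allclose(two_branch, F.conv2d(x, wf, bf, padding=1), atol=1e-5)
```

The fusion identity is what makes the over-parameterization free at inference: the two-branch block and the single fused convolution compute the same function, so the compensatory filters only influence training.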

Similar Articles

1. Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks. IEEE Trans Cybern. 2020 Aug;50(8):3594-3604. doi: 10.1109/TCYB.2019.2933477. Epub 2019 Aug 27.
2. Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters. IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):9139-9148. doi: 10.1109/TNNLS.2022.3156047. Epub 2023 Oct 27.
3. HRel: Filter pruning based on High Relevance between activation maps and class labels. Neural Netw. 2022 Mar;147:186-197. doi: 10.1016/j.neunet.2021.12.017. Epub 2021 Dec 30.
4. Filter Sketch for Network Pruning. IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7091-7100. doi: 10.1109/TNNLS.2021.3084206. Epub 2022 Nov 30.
5. Filter Pruning via Learned Representation Median in the Frequency Domain. IEEE Trans Cybern. 2023 May;53(5):3165-3175. doi: 10.1109/TCYB.2021.3124284. Epub 2023 Apr 21.
6. Block-Wise Partner Learning for Model Compression. IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17582-17595. doi: 10.1109/TNNLS.2023.3306512. Epub 2024 Dec 2.
7. Training Compact CNNs for Image Classification Using Dynamic-Coded Filter Fusion. IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):10478-10487. doi: 10.1109/TPAMI.2023.3259402. Epub 2023 Jun 30.
8. Fast Filter Pruning via Coarse-to-Fine Neural Architecture Search and Contrastive Knowledge Transfer. IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9674-9685. doi: 10.1109/TNNLS.2023.3236336. Epub 2024 Jul 8.
9. Filter Pruning by Switching to Neighboring CNNs With Good Attributes. IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):8044-8056. doi: 10.1109/TNNLS.2022.3149332. Epub 2023 Oct 5.