Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN.

Authors

Hao Tianxiang, Ding Xiaohan, Han Jungong, Guo Yuchen, Ding Guiguang

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16831-16844. doi: 10.1109/TNNLS.2023.3298263. Epub 2024 Oct 29.

DOI: 10.1109/TNNLS.2023.3298263
PMID: 37824319
Abstract

The redundancy in convolutional neural networks (CNNs) makes it possible to remove some filters/channels with an acceptable performance drop. However, the training objective of a CNN usually minimizes an accuracy-related loss function and pays no attention to the redundancy, so the redundancy is distributed randomly across all the filters; removing any of them may therefore cause information loss and an accuracy drop, necessitating a fine-tuning step for recovery. In this article, we propose to manipulate the redundancy during training to facilitate network pruning. To this end, we propose a novel centripetal SGD (C-SGD) that makes some filters identical, producing an ideal redundancy pattern: such filters are purely redundant because of their duplicates, so removing them does not harm the network. As shown on CIFAR and ImageNet, C-SGD outperforms the existing methods because the redundancy is better organized. C-SGD is also efficient: it is as fast as regular SGD, requires no fine-tuning, and can be applied to all layers simultaneously, even in very deep CNNs. Besides, C-SGD can improve the accuracy of a CNN by first training a model with the same architecture but wider layers and then squeezing it into the original width.
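To make the mechanism concrete, below is a minimal NumPy sketch of a centripetal-style update on toy filters; it is an illustration, not the paper's exact algorithm. Filters in the same cluster share their averaged task gradient and are additionally pulled toward the cluster mean, so their pairwise differences decay geometrically to zero; once they are identical, the duplicate output channel can be folded into the next layer with no change in output. The clustering, the stand-in quadratic loss, the constants lr and epsilon, and the toy layer shapes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: six "filters" of a conv layer, flattened to length-27 vectors
# (think 3x3x3 kernels), grouped into three clusters of two. After training,
# the two filters in each cluster should be identical, so one of them can
# be pruned without changing what the layer computes.
num_filters, dim = 6, 27
filters = rng.normal(size=(num_filters, dim))
targets = rng.normal(size=(num_filters, dim))   # stand-in task optimum (assumption)
clusters = [[0, 1], [2, 3], [4, 5]]

lr = 0.05      # step size; illustrative value
epsilon = 0.3  # centripetal strength; illustrative value

def loss_grad(F):
    """Gradient of a stand-in quadratic loss; a real run would use backprop."""
    return F - targets

for _ in range(2000):
    g = loss_grad(filters)
    updated = filters.copy()
    for cluster in clusters:
        # Centripetal update: every filter in a cluster takes the cluster's
        # averaged task gradient and is additionally pulled toward the
        # cluster mean, so pairwise differences shrink geometrically.
        avg_grad = g[cluster].mean(axis=0)
        center = filters[cluster].mean(axis=0)
        for i in cluster:
            updated[i] = filters[i] - lr * avg_grad - lr * epsilon * (filters[i] - center)
    filters = updated

for a, b in clusters:
    print(np.abs(filters[a] - filters[b]).max())   # ~1e-13: effectively identical

# Lossless removal: identical filters yield identical output channels, so a
# (toy, fully connected) next layer can fold each duplicate channel into the
# kept one by summing the corresponding input weights.
x = rng.normal(size=dim)
W_next = rng.normal(size=(4, num_filters))
full_output = W_next @ (filters @ x)

keep = [c[0] for c in clusters]                     # keep one filter per cluster
W_merged = np.stack([W_next[:, c].sum(axis=1) for c in clusters], axis=1)
pruned_output = W_merged @ (filters[keep] @ x)
print(np.allclose(full_output, pruned_output))      # True
```

The last print illustrates why no fine-tuning is needed in this regime: merging duplicate channels is exact up to floating-point error, rather than an approximation that discards information.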

