



StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.

Author Information

Zhang Tianyun, Ye Shaokai, Feng Xiaoyu, Ma Xiaolong, Zhang Kaiqi, Li Zhengang, Tang Jian, Liu Sijia, Lin Xue, Liu Yongpan, Fardad Makan, Wang Yanzhi

Publication Information

IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2259-2273. doi: 10.1109/TNNLS.2020.3045153. Epub 2022 May 2.

DOI: 10.1109/TNNLS.2020.3045153
PMID: 33587706
Abstract

Weight pruning methods of deep neural networks (DNNs) have been demonstrated to achieve a good model pruning rate without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and demonstrated actual GPU acceleration. However, in prior work, the pruning rate (degree of sparsity) and GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs. It is a framework that can be used to induce different types of structured sparsity, such as filterwise, channelwise, and shapewise sparsity, as well as nonstructured sparsity. The proposed framework incorporates stochastic gradient descent (SGD; or ADAM) with alternating direction method of multipliers (ADMM) and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. Leveraging special characteristics of ADMM, we further propose a progressive, multistep weight pruning framework and a network purification and unused path removal procedure, in order to achieve higher pruning rate without accuracy loss. Without loss of accuracy on the AlexNet model, we achieve 2.58× and 3.65× average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 3.15× and 8.52× when allowing a moderate accuracy loss of 2%. In this case, the model compression for convolutional layers is 15.0× , corresponding to 11.93× measured CPU speedup. As another example, for the ResNet-18 model on the CIFAR-10 data set, we achieve an unprecedented 54.2× structured pruning rate on CONV layers. 
This is 32× higher pruning rate compared with recent work and can further translate into 7.6× inference time speedup on the Adreno 640 mobile GPU compared with the original, unpruned DNN model. We share our codes and models at the link http://bit.ly/2M0V7DO.
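The core mechanism the abstract describes, SGD interleaved with ADMM, where the regularization target is updated analytically each iteration by projecting onto a structured-sparse set, can be illustrated with a minimal NumPy sketch. This is not the authors' released implementation (see their linked repository for that); the function names `project_filterwise`, `admm_prune`, and the toy `loss_grad` callback are illustrative assumptions, and filterwise sparsity stands in for the other structured-sparsity types mentioned:

```python
import numpy as np

def project_filterwise(W, keep):
    # Euclidean projection onto the filterwise-sparse set: retain the
    # `keep` filters (rows) with the largest L2 norm, zero out the rest.
    norms = np.linalg.norm(W.reshape(W.shape[0], -1), axis=1)
    idx = np.argsort(norms)[-keep:]
    Z = np.zeros_like(W)
    Z[idx] = W[idx]
    return Z

def admm_prune(W, loss_grad, keep, rho=1e-3, lr=1e-2, steps=200):
    # ADMM splitting: minimize loss(W) + (rho/2)||W - Z + U||^2,
    # where the auxiliary variable Z is constrained to be filterwise sparse.
    Z = project_filterwise(W, keep)
    U = np.zeros_like(W)  # scaled dual variable
    for _ in range(steps):
        # Gradient step on the loss plus the dynamic quadratic regularizer;
        # the regularization target (Z - U) moves every iteration.
        g = loss_grad(W) + rho * (W - Z + U)
        W = W - lr * g
        # Analytic update of the regularization target: Euclidean
        # projection of W + U onto the structured-sparse set.
        Z = project_filterwise(W + U, keep)
        U = U + W - Z  # dual ascent
    return project_filterwise(W, keep)  # hard-prune the trained weights
```

Swapping `project_filterwise` for a channelwise, shapewise, or magnitude-based projection yields the other sparsity types in the framework; this "dynamic regularization" view is why ADMM pruning tolerates much higher sparsity than a fixed L1/L2 penalty.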


Similar Articles

1
StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2259-2273. doi: 10.1109/TNNLS.2020.3045153. Epub 2022 May 2.
2
Non-Structured DNN Weight Pruning-Is It Beneficial in Any Platform?
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4930-4944. doi: 10.1109/TNNLS.2021.3063265. Epub 2022 Aug 31.
3
Reweighted Alternating Direction Method of Multipliers for DNN weight pruning.
Neural Netw. 2024 Nov;179:106534. doi: 10.1016/j.neunet.2024.106534. Epub 2024 Jul 14.
4
Feature flow regularization: Improving structured sparsity in deep neural networks.
Neural Netw. 2023 Apr;161:598-613. doi: 10.1016/j.neunet.2023.02.013. Epub 2023 Feb 13.
5
GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6224-6239. doi: 10.1109/TPAMI.2021.3089687. Epub 2022 Sep 14.
6
Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning.
IEEE Trans Neural Netw Learn Syst. 2020 Feb;31(2):574-588. doi: 10.1109/TNNLS.2019.2906563. Epub 2019 Apr 12.
7
LAP: Latency-aware automated pruning with dynamic-based filter selection.
Neural Netw. 2022 Aug;152:407-418. doi: 10.1016/j.neunet.2022.05.002. Epub 2022 May 10.
8
Weak sub-network pruning for strong and efficient neural networks.
Neural Netw. 2021 Dec;144:614-626. doi: 10.1016/j.neunet.2021.09.015. Epub 2021 Sep 30.
9
Discrimination-Aware Network Pruning for Deep Model Compression.
IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4035-4051. doi: 10.1109/TPAMI.2021.3066410. Epub 2022 Jul 1.
10
PCA driven mixed filter pruning for efficient convNets.
PLoS One. 2022 Jan 24;17(1):e0262386. doi: 10.1371/journal.pone.0262386. eCollection 2022.

Cited By

1
GAT TransPruning: progressive channel pruning strategy combining graph attention network and transformer.
PeerJ Comput Sci. 2024 Apr 23;10:e2012. doi: 10.7717/peerj-cs.2012. eCollection 2024.
2
Efficient Layer-Wise N:M Sparse CNN Accelerator with Flexible SPEC: Sparse Processing Element Clusters.
Micromachines (Basel). 2023 Feb 24;14(3):528. doi: 10.3390/mi14030528.
3
A Novel Deep-Learning Model Compression Based on Filter-Stripe Group Pruning and Its IoT Application.
Sensors (Basel). 2022 Jul 27;22(15):5623. doi: 10.3390/s22155623.