

StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.

Authors

Zhang Tianyun, Ye Shaokai, Feng Xiaoyu, Ma Xiaolong, Zhang Kaiqi, Li Zhengang, Tang Jian, Liu Sijia, Lin Xue, Liu Yongpan, Fardad Makan, Wang Yanzhi

Publication

IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2259-2273. doi: 10.1109/TNNLS.2020.3045153. Epub 2022 May 2.

Abstract

Weight pruning methods of deep neural networks (DNNs) have been demonstrated to achieve a good model pruning rate without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and demonstrated actual GPU acceleration. However, in prior work, the pruning rate (degree of sparsity) and GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs. It is a framework that can be used to induce different types of structured sparsity, such as filterwise, channelwise, and shapewise sparsity, as well as nonstructured sparsity. The proposed framework incorporates stochastic gradient descent (SGD; or ADAM) with alternating direction method of multipliers (ADMM) and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. Leveraging special characteristics of ADMM, we further propose a progressive, multistep weight pruning framework and a network purification and unused path removal procedure, in order to achieve a higher pruning rate without accuracy loss. Without loss of accuracy on the AlexNet model, we achieve 2.58× and 3.65× average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 3.15× and 8.52× when allowing a moderate accuracy loss of 2%. In this case, the model compression for convolutional layers is 15.0×, corresponding to 11.93× measured CPU speedup. As another example, for the ResNet-18 model on the CIFAR-10 data set, we achieve an unprecedented 54.2× structured pruning rate on CONV layers. This is 32× higher pruning rate compared with recent work and can further translate into 7.6× inference time speedup on the Adreno 640 mobile GPU compared with the original, unpruned DNN model. We share our codes and models at the link http://bit.ly/2M0V7DO.
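The abstract describes alternating SGD on the network loss with ADMM updates, where the regularization target is re-derived analytically each iteration by projecting the weights onto the chosen structured-sparsity set. A minimal sketch of that alternating loop, on a toy quadratic loss and with channelwise (column) sparsity as the structure, might look as follows; the names (`rho`, `keep`, `project_channelwise`) and the toy loss are illustrative assumptions, not details from the paper:

```python
import numpy as np

def project_channelwise(M, keep):
    """Z-step: project onto the structured-sparsity set by keeping the
    `keep` columns (channels) with largest L2 norm and zeroing the rest.
    This is the analytically updated regularization target."""
    norms = np.linalg.norm(M, axis=0)
    out = np.zeros_like(M)
    top = np.argsort(norms)[-keep:]
    out[:, top] = M[:, top]
    return out

def admm_prune(W_target, keep, rho=1.0, lr=0.1, steps=200):
    """Alternate three updates, as in a standard ADMM pruning loop:
      1) gradient step on loss(W) + rho/2 * ||W - Z + U||^2,
      2) Z-step: Z = project(W + U),
      3) dual update: U += W - Z.
    Here loss(W) = 1/2 * ||W - W_target||^2 stands in for the network loss."""
    W = W_target.copy()
    Z = project_channelwise(W, keep)
    U = np.zeros_like(W)
    for _ in range(steps):
        grad = (W - W_target) + rho * (W - Z + U)  # loss grad + penalty grad
        W -= lr * grad                             # "SGD" step on W
        Z = project_channelwise(W + U, keep)       # analytic Z-update
        U += W - Z                                 # dual-variable update
    return project_channelwise(W, keep)            # final hard projection

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 6))
W_pruned = admm_prune(W0, keep=2)
```

After the loop, `W_pruned` has exactly two nonzero columns, i.e., the weights have been driven toward, and then hard-projected onto, the channelwise-sparse set. In the actual framework the W-step is a full SGD/ADAM pass over the training loss, and the projection is defined per layer by the chosen sparsity type (filterwise, channelwise, shapewise, or nonstructured).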

