Intermediate-grained kernel elements pruning with structured sparsity.

Affiliations

School of Computer Science and Technology, Xidian University, No. 2 South Taibai Road, Xi'an, 710071, PR China.

Publication Information

Neural Netw. 2024 Dec;180:106708. doi: 10.1016/j.neunet.2024.106708. Epub 2024 Sep 7.

DOI: 10.1016/j.neunet.2024.106708
PMID: 39276589
Abstract

Neural network pruning offers a promising route to deploying neural networks on embedded or mobile devices with limited resources. Although current structured strategies are unconstrained by specific hardware architectures during forward inference, the decline in classification accuracy of structured methods exceeds what is tolerable at common pruning rates. This inspires us to develop a technique that achieves a high pruning rate with only a small decline in accuracy while retaining the general nature of structured pruning. In this paper, we propose a new pruning method, KEP (Kernel Elements Pruning), which compresses deep convolutional neural networks by assessing the significance of the elements in each kernel plane and removing the unimportant ones. In this method, we constrain unimportant elements with a controllable regularization penalty guided by a prior-knowledge mask, obtaining a compact model. For forward inference, we introduce a sparse convolution operation, distinct from the standard sliding window, that eliminates invalid zero calculations, and we verify its effectiveness for further deployment on FPGAs. Extensive experiments demonstrate the effectiveness of KEP on two datasets, CIFAR-10 and ImageNet. Notably, while introducing only a few indexes of non-zero weights, KEP significantly improves on the latest structured methods in terms of parameter and floating-point operation (FLOPs) reduction, and performs well on large datasets.
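
To make the mechanism concrete, the sketch below illustrates the intermediate-grained idea in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: element significance is approximated by weight magnitude, the prior-knowledge mask simply keeps the top-k elements of each kernel plane, and the controllable penalty is a plain L2 term on the masked-out elements. The class and parameter names (`KernelElementPruner`, `keep`, `penalty`) are hypothetical.

```python
import torch
import torch.nn as nn

class KernelElementPruner:
    """Hypothetical sketch of kernel-element pruning for one conv layer.

    For a weight tensor of shape (out_ch, in_ch, kH, kW), each (kH, kW)
    kernel plane keeps its `keep` highest-magnitude elements; the rest
    are flagged by a binary mask and pushed toward zero by a penalty.
    """

    def __init__(self, conv: nn.Conv2d, keep: int = 4, penalty: float = 1e-4):
        self.conv = conv
        self.penalty = penalty
        w = conv.weight.detach()
        o, i, kh, kw = w.shape
        flat = w.abs().reshape(o * i, kh * kw)
        # Magnitude as a stand-in for the paper's element-significance score.
        idx = flat.topk(keep, dim=1).indices
        mask = torch.zeros_like(flat)
        mask.scatter_(1, idx, 1.0)  # 1 = keep, 0 = prune
        self.mask = mask.reshape(o, i, kh, kw)

    def regularization(self) -> torch.Tensor:
        # Penalize only the elements marked unimportant, so they decay
        # toward zero during fine-tuning; the `penalty` coefficient is
        # the "controllable" part of the penalty.
        return self.penalty * (self.conv.weight * (1 - self.mask)).pow(2).sum()

    def apply_mask(self) -> None:
        # Hard-zero the pruned elements, e.g. after each optimizer step.
        with torch.no_grad():
            self.conv.weight.mul_(self.mask)
```

During fine-tuning, one would add `pruner.regularization()` to the task loss and call `pruner.apply_mask()` after each optimizer step; at inference, only the surviving elements and their in-plane indexes need to be stored, which is what enables the zero-skipping sparse convolution described in the abstract.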


Similar Articles

1. Intermediate-grained kernel elements pruning with structured sparsity.
   Neural Netw. 2024 Dec;180:106708. doi: 10.1016/j.neunet.2024.106708. Epub 2024 Sep 7.
2. GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
   IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6224-6239. doi: 10.1109/TPAMI.2021.3089687. Epub 2022 Sep 14.
3. Feature flow regularization: Improving structured sparsity in deep neural networks.
   Neural Netw. 2023 Apr;161:598-613. doi: 10.1016/j.neunet.2023.02.013. Epub 2023 Feb 13.
4. Weak sub-network pruning for strong and efficient neural networks.
   Neural Netw. 2021 Dec;144:614-626. doi: 10.1016/j.neunet.2021.09.015. Epub 2021 Sep 30.
5. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
   Front Comput Neurosci. 2021 Oct 27;15:760554. doi: 10.3389/fncom.2021.760554. eCollection 2021.
6. Random pruning: channel sparsity by expectation scaling factor.
   PeerJ Comput Sci. 2023 Sep 5;9:e1564. doi: 10.7717/peerj-cs.1564. eCollection 2023.
7. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
   IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2259-2273. doi: 10.1109/TNNLS.2020.3045153. Epub 2022 May 2.
8. Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.
   J Neural Eng. 2023 Jul 31;20(4). doi: 10.1088/1741-2552/ace5dc.
9. Reweighted Alternating Direction Method of Multipliers for DNN weight pruning.
   Neural Netw. 2024 Nov;179:106534. doi: 10.1016/j.neunet.2024.106534. Epub 2024 Jul 14.
10. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
    Entropy (Basel). 2021 Aug 13;23(8):1042. doi: 10.3390/e23081042.