
Carrying Out CNN Channel Pruning in a White Box.

Authors

Zhang Yuxin, Lin Mingbao, Lin Chia-Wen, Chen Jie, Wu Yongjian, Tian Yonghong, Ji Rongrong

Publication

IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):7946-7955. doi: 10.1109/TNNLS.2022.3147269. Epub 2023 Oct 5.

DOI: 10.1109/TNNLS.2022.3147269
PMID: 35157600
Abstract

Channel pruning has been long studied to compress convolutional neural networks (CNNs), as it significantly reduces the overall computation. Prior works implement channel pruning in an unexplainable manner, which tends to reduce the final classification error while failing to consider the internal influence of each channel. In this article, we conduct channel pruning in a white box. Through deep visualization of feature maps activated by different channels, we observe that different channels contribute differently to different categories in image classification. Inspired by this, we choose to preserve channels contributing to most categories. Specifically, to model the contribution of each channel to differentiating categories, we develop a class-wise mask for each channel, implemented in a dynamic training manner with respect to the input image's category. On the basis of the learned class-wise masks, we perform a global voting mechanism to remove channels with less category discrimination. Lastly, a fine-tuning process is conducted to recover the performance of the pruned model. To the best of our knowledge, this is the first time that CNN interpretability theory has been used to guide channel pruning. Extensive experiments on representative image classification tasks demonstrate the superiority of our White-Box over many state-of-the-art (SOTA) methods. For instance, on CIFAR-10, it reduces floating-point operations (FLOPs) by 65.23% with even a 0.62% accuracy improvement for ResNet-110. On ILSVRC-2012, White-Box achieves a 45.6% FLOP reduction with only a small loss of 0.83% in top-1 accuracy for ResNet-50. Code is available at https://github.com/zyxxmu/White-Box.
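The global voting step described in the abstract — keep the channels whose learned class-wise masks indicate contribution to the most categories — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the learned masks are summarized as a `(num_classes, num_channels)` array of values in [0, 1], and the names `global_voting_prune`, `keep_ratio`, and `threshold` are hypothetical.

```python
import numpy as np

def global_voting_prune(class_masks: np.ndarray, keep_ratio: float,
                        threshold: float = 0.5) -> np.ndarray:
    """Pick channels to keep via a global-voting criterion.

    class_masks: (num_classes, num_channels) array of learned class-wise
    mask values in [0, 1] (a simplified stand-in for the masks the paper
    learns during dynamic training).
    """
    # Each class "votes" for a channel whose mask value exceeds the threshold.
    votes = (class_masks > threshold).sum(axis=0)
    num_keep = max(1, int(round(keep_ratio * class_masks.shape[1])))
    # Keep the channels that discriminate the most categories.
    keep_idx = np.argsort(votes)[::-1][:num_keep]
    return np.sort(keep_idx)

# Toy example: 3 classes, 4 channels. Channels 0 and 2 get the most votes.
masks = np.array([
    [0.9, 0.1, 0.8, 0.2],
    [0.7, 0.2, 0.9, 0.1],
    [0.1, 0.3, 0.6, 0.9],
])
print(global_voting_prune(masks, keep_ratio=0.5))  # keeps channels 0 and 2
```

After this selection, the remaining channels would be fine-tuned to recover accuracy, as the abstract describes.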


Similar Articles

1. Carrying Out CNN Channel Pruning in a White Box. IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):7946-7955. doi: 10.1109/TNNLS.2022.3147269. Epub 2023 Oct 5.
2. Random pruning: channel sparsity by expectation scaling factor. PeerJ Comput Sci. 2023 Sep 5;9:e1564. doi: 10.7717/peerj-cs.1564. eCollection 2023.
3. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification. Front Comput Neurosci. 2021 Oct 27;15:760554. doi: 10.3389/fncom.2021.760554. eCollection 2021.
4. Discrimination-Aware Network Pruning for Deep Model Compression. IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4035-4051. doi: 10.1109/TPAMI.2021.3066410. Epub 2022 Jul 1.
5. HRel: Filter pruning based on High Relevance between activation maps and class labels. Neural Netw. 2022 Mar;147:186-197. doi: 10.1016/j.neunet.2021.12.017. Epub 2021 Dec 30.
6. Weak sub-network pruning for strong and efficient neural networks. Neural Netw. 2021 Dec;144:614-626. doi: 10.1016/j.neunet.2021.09.015. Epub 2021 Sep 30.
7. Network Pruning Using Adaptive Exemplar Filters. IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7357-7366. doi: 10.1109/TNNLS.2021.3084856. Epub 2022 Nov 30.
8. Filter Sketch for Network Pruning. IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7091-7100. doi: 10.1109/TNNLS.2021.3084206. Epub 2022 Nov 30.
9. Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters. IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):9139-9148. doi: 10.1109/TNNLS.2022.3156047. Epub 2023 Oct 27.
10. Filter Pruning Based on Information Capacity and Independence. IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8401-8413. doi: 10.1109/TNNLS.2024.3415068. Epub 2025 May 2.

Cited By

1. Multi-objective evolutionary optimization for hardware-aware neural network pruning. Fundam Res. 2022 Aug 9;4(4):941-950. doi: 10.1016/j.fmre.2022.07.013. eCollection 2024 Jul.
2. Random pruning: channel sparsity by expectation scaling factor. PeerJ Comput Sci. 2023 Sep 5;9:e1564. doi: 10.7717/peerj-cs.1564. eCollection 2023.