Suppr 超能文献




Accelerating Convolutional Neural Networks by Removing Interspatial and Interkernel Redundancies.

Publication

IEEE Trans Cybern. 2020 Feb;50(2):452-464. doi: 10.1109/TCYB.2018.2873762. Epub 2018 Oct 18.

DOI: 10.1109/TCYB.2018.2873762
PMID: 30346299
Abstract

Recently, the high computational resource demands of convolutional neural networks (CNNs) have hindered a wide range of their applications. To solve this problem, many previous works attempted to reduce the redundant calculations during the evaluation of CNNs. However, these works mainly focused on either interspatial or interkernel redundancy. In this paper, we further accelerate existing CNNs by removing both types of redundancies. First, we convert interspatial redundancy into interkernel redundancy by decomposing one convolutional layer to one block that we design. Then, we adopt rank-selection and pruning methods to remove the interkernel redundancy. The rank-selection method, which considerably reduces manpower, contributes to determining the number of kernels to be pruned in the pruning method. We apply a layer-wise training algorithm rather than the traditional end-to-end training to overcome the difficulty of convergence. Finally, we fine-tune the entire network to achieve better performance. Our method is applied on three widely used datasets of an image classification task. We achieve better results in terms of accuracy and compression rate compared with previous state-of-the-art methods.
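The paper itself provides no code. As a rough, self-contained illustration of the interkernel-redundancy step described above, the sketch below prunes the weakest kernels of a toy convolutional layer by L1 magnitude — a common proxy criterion; the paper's actual rank-selection method may differ, and all names here are hypothetical:

```python
# Hypothetical sketch of magnitude-based kernel pruning. The paper's
# rank-selection criterion is not reproduced here; L1 norm is used as a
# simple stand-in for kernel importance.

def l1_norm(kernel):
    """Sum of absolute weights in one kernel (arbitrarily nested lists)."""
    if isinstance(kernel, (int, float)):
        return abs(kernel)
    return sum(l1_norm(k) for k in kernel)

def prune_kernels(weights, keep):
    """Keep the `keep` kernels with the largest L1 norm; preserve order."""
    ranked = sorted(range(len(weights)),
                    key=lambda i: l1_norm(weights[i]),
                    reverse=True)
    kept = sorted(ranked[:keep])          # restore original kernel order
    return [weights[i] for i in kept]

# Toy layer: 4 kernels of shape 2x2; two are near-zero (redundant).
layer = [
    [[0.9, -0.8], [0.7, 0.6]],            # strong kernel
    [[0.01, 0.0], [0.02, -0.01]],         # near-zero: redundant
    [[0.5, 0.4], [-0.3, 0.2]],
    [[0.0, 0.05], [0.0, 0.01]],           # near-zero: redundant
]
pruned = prune_kernels(layer, keep=2)     # drops the two near-zero kernels
```

In a real pipeline this would operate on the 4-D weight tensor of a decomposed convolutional layer, with the `keep` count chosen by the paper's rank-selection procedure rather than fixed by hand.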


Similar Articles

1. Accelerating Convolutional Neural Networks by Removing Interspatial and Interkernel Redundancies.
   IEEE Trans Cybern. 2020 Feb;50(2):452-464. doi: 10.1109/TCYB.2018.2873762. Epub 2018 Oct 18.
2. Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations.
   IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):3048-3056. doi: 10.1109/TPAMI.2018.2874634. Epub 2018 Oct 8.
3. Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks.
   IEEE Trans Cybern. 2020 Aug;50(8):3594-3604. doi: 10.1109/TCYB.2019.2933477. Epub 2019 Aug 27.
4. Discrimination-Aware Network Pruning for Deep Model Compression.
   IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4035-4051. doi: 10.1109/TPAMI.2021.3066410. Epub 2022 Jul 1.
5. Holistic CNN Compression via Low-Rank Decomposition with Knowledge Transfer.
   IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):2889-2905. doi: 10.1109/TPAMI.2018.2873305. Epub 2018 Oct 1.
6. Cross-layer importance evaluation for neural network pruning.
   Neural Netw. 2024 Nov;179:106496. doi: 10.1016/j.neunet.2024.106496. Epub 2024 Jul 3.
7. Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression.
   Sensors (Basel). 2021 May 16;21(10):3464. doi: 10.3390/s21103464.
8. Redundancy-Aware Pruning of Convolutional Neural Networks.
   Neural Comput. 2020 Dec;32(12):2532-2556. doi: 10.1162/neco_a_01330. Epub 2020 Oct 20.
9. A Dual Neural Architecture Combined SqueezeNet with OctConv for LiDAR Data Classification.
   Sensors (Basel). 2019 Nov 12;19(22):4927. doi: 10.3390/s19224927.
10. Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN.
    IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16831-16844. doi: 10.1109/TNNLS.2023.3298263. Epub 2024 Oct 29.