
Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression

Affiliations

Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China.

University of Chinese Academy of Sciences, Beijing 100049, China.

Publication Information

Sensors (Basel). 2021 May 16;21(10):3464. doi: 10.3390/s21103464.

DOI: 10.3390/s21103464
PMID: 34065680
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8155900/
Abstract

Convolutional neural networks (CNNs) have achieved significant breakthroughs in various domains, such as natural language processing (NLP) and computer vision. However, performance improvement is often accompanied by large model size and computation costs, which makes CNNs unsuitable for resource-constrained devices. Consequently, there is an urgent need to compress CNNs so as to reduce model size and computation costs. This paper proposes a layer-wise differentiable compression (LWDC) algorithm for compressing CNNs structurally. A differentiable selection operator OS is embedded in the model so that the model is compressed and trained simultaneously by gradient descent in one go. In contrast to most existing methods, which prune parameters from redundant operators, our method directly replaces the original bulky operators with more lightweight ones; it only requires specifying the set of lightweight operators and the regularization factor in advance, rather than a compression rate for each layer. The compressed model produced by our method is generic and does not need any special hardware or software support. Experimental results on CIFAR-10, CIFAR-100, and ImageNet demonstrate the effectiveness of our method. LWDC obtains more significant compression than state-of-the-art methods in most cases, while incurring lower performance degradation. The impact of the lightweight operators and the regularization factor on compression rate and accuracy is also evaluated.
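The abstract describes the core mechanism only at a high level: each original layer is paired with a set of candidate lightweight operators, a differentiable selection operator chooses among them during normal gradient-descent training, and a regularization factor biases the choice toward cheaper operators. The paper's reference implementation is not reproduced here, so the sketch below is only an illustrative approximation of that idea in PyTorch; the specific candidate set, the softmax relaxation, the per-operator cost values, and the form of the regularizer are assumptions for illustration, not the authors' exact formulation.

# Minimal sketch of a layer-wise differentiable selection operator,
# loosely following the idea in the abstract (NOT the authors' code).
# Assumed details: softmax relaxation over candidates, hand-picked
# relative costs, and an expected-cost regularizer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectableLayer(nn.Module):
    """Wraps one original layer with lightweight candidates and learns
    a differentiable choice among them."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Candidate operators: the original 3x3 conv plus cheaper alternatives.
        self.candidates = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=1),                    # original (bulky)
            nn.Sequential(                                              # depthwise separable
                nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),
                nn.Conv2d(in_ch, out_ch, 1)),
            nn.Conv2d(in_ch, out_ch, 1),                                # 1x1 conv
        ])
        # Assumed relative cost of each candidate, used by the regularizer.
        self.register_buffer("costs", torch.tensor([1.0, 0.2, 0.1]))
        # Architecture logits: one per candidate, trained by gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)          # differentiable selection
        return sum(wi * op(x) for wi, op in zip(w, self.candidates))

    def selection_cost(self):
        # Expected cost of the current selection; pushes toward cheap operators.
        return (F.softmax(self.alpha, dim=0) * self.costs).sum()

def training_step(model, x, y, optimizer, reg_factor=1e-2):
    # Joint objective: task loss + regularization factor * total selection cost,
    # so compression and training happen in one optimization pass.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss = loss + reg_factor * sum(
        m.selection_cost() for m in model.modules()
        if isinstance(m, SelectableLayer))
    loss.backward()
    optimizer.step()
    return loss.item()

After training, each wrapped layer would keep only the candidate with the largest selection weight, yielding an ordinary compressed network of standard operators, which is consistent with the abstract's claim that the result needs no special hardware or software support.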


Article figures (g001–g010 and appendix figures g0A1–g0A3) are available in the PMC record: https://pmc.ncbi.nlm.nih.gov/articles/PMC8155900/

Similar Articles

1. Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression.
Sensors (Basel). 2021 May 16;21(10):3464. doi: 10.3390/s21103464.
2. Weak sub-network pruning for strong and efficient neural networks.
Neural Netw. 2021 Dec;144:614-626. doi: 10.1016/j.neunet.2021.09.015. Epub 2021 Sep 30.
3. Training Lightweight Deep Convolutional Neural Networks Using Bag-of-Features Pooling.
IEEE Trans Neural Netw Learn Syst. 2019 Jun;30(6):1705-1715. doi: 10.1109/TNNLS.2018.2872995. Epub 2018 Oct 24.
4. ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions.
IEEE Trans Pattern Anal Mach Intell. 2021 Aug;43(8):2570-2581. doi: 10.1109/TPAMI.2020.2975796. Epub 2021 Jul 1.
5. Model Compression Based on Differentiable Network Channel Pruning.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10203-10212. doi: 10.1109/TNNLS.2022.3165123. Epub 2023 Nov 30.
6. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
Front Comput Neurosci. 2021 Oct 27;15:760554. doi: 10.3389/fncom.2021.760554. eCollection 2021.
7. DMPP: Differentiable multi-pruner and predictor for neural network pruning.
Neural Netw. 2022 Mar;147:103-112. doi: 10.1016/j.neunet.2021.12.020. Epub 2021 Dec 30.
8. Cross-Entropy Pruning for Compressing Convolutional Neural Networks.
Neural Comput. 2018 Nov;30(11):3128-3149. doi: 10.1162/neco_a_01131. Epub 2018 Sep 14.
9. Differentiable Network Pruning via Polarization of Probabilistic Channelwise Soft Masks.
Comput Intell Neurosci. 2022 May 5;2022:7775419. doi: 10.1155/2022/7775419. eCollection 2022.
10. DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):9847-9858. doi: 10.1109/TNNLS.2022.3161284. Epub 2023 Nov 30.

Cited By

1. Speech emotion recognition with light weight deep neural ensemble model using hand crafted features.
Sci Rep. 2025 Apr 7;15(1):11824. doi: 10.1038/s41598-025-95734-z.
2. An enhanced speech emotion recognition using vision transformer.
Sci Rep. 2024 Jun 7;14(1):13126. doi: 10.1038/s41598-024-63776-4.

References

1. A Lightweight Convolutional Neural Network Architecture Applied for Bone Metastasis Classification in Nuclear Medicine: A Case Study on Prostate Cancer Patients.
Healthcare (Basel). 2020 Nov 18;8(4):493. doi: 10.3390/healthcare8040493.
2. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
IEEE Trans Neural Netw Learn Syst. 2021 Oct;32(10):4499-4513. doi: 10.1109/TNNLS.2020.3018177. Epub 2021 Oct 5.
3. Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks.
IEEE Trans Cybern. 2020 Aug;50(8):3594-3604. doi: 10.1109/TCYB.2019.2933477. Epub 2019 Aug 27.