

A Knee-Guided Evolutionary Algorithm for Compressing Deep Neural Networks.

Publication Information

IEEE Trans Cybern. 2021 Mar;51(3):1626-1638. doi: 10.1109/TCYB.2019.2928174. Epub 2021 Feb 17.

DOI:10.1109/TCYB.2019.2928174
PMID:31380778
Abstract

Deep neural networks (DNNs) have been regarded as fundamental tools for many disciplines. Meanwhile, they are known for their large-scale parameters, high redundancy in weights, and extensive computing resource consumption, which pose a tremendous challenge to deployment in real-time applications or on resource-constrained devices. To cope with this issue, compressing DNNs to accelerate their inference has drawn extensive interest recently. The basic idea is to prune parameters with little performance degradation. However, the overparameterized nature and the conflict between parameter reduction and performance maintenance make it prohibitive to manually search the pruning parameter space. In this paper, we formally establish filter pruning as a multiobjective optimization problem, and propose a knee-guided evolutionary algorithm (KGEA) that can automatically search for a solution with a quality tradeoff between the scale of parameters and performance, in which both conflicting objectives can be optimized simultaneously. In particular, by incorporating a minimum Manhattan distance approach, the search effort in the proposed KGEA is explicitly guided toward the knee area, which greatly facilitates the manual search for a good tradeoff solution. Moreover, the parameter importance is directly estimated on the criterion of performance loss, which can robustly identify the redundancy. In addition to the knee solution, a performance-improved model can also be found in a fine-tuning-free fashion. The experiments on compressing fully convolutional LeNet and VGG-19 networks validate the superiority of the proposed algorithm over the state-of-the-art competing methods.
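The abstract frames filter pruning as a two-objective problem (parameter count vs. performance loss) and steers the search toward the knee of the Pareto front via a minimum Manhattan distance criterion. A minimal sketch of that knee-selection step, assuming a 2-D front of (parameter-count, error) pairs normalized per objective; the function name and data are illustrative, not the authors' implementation:

```python
def knee_point(front):
    """Pick the knee of a 2-D Pareto front by minimum Manhattan distance.

    `front` is a list of (params, error) pairs, both objectives minimized.
    Each objective is normalized to [0, 1] over the front, and the point
    with the smallest Manhattan distance to the ideal point (0, 0) is
    returned as the knee (best compromise) solution.
    """
    xs = [p for p, _ in front]
    ys = [e for _, e in front]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)

    best, best_d = None, float("inf")
    for p, e in front:
        # Normalize each objective so the two scales are comparable.
        nx = (p - x_min) / (x_max - x_min) if x_max > x_min else 0.0
        ny = (e - y_min) / (y_max - y_min) if y_max > y_min else 0.0
        d = nx + ny  # Manhattan distance to the ideal point (0, 0)
        if d < best_d:
            best, best_d = (p, e), d
    return best


# Toy front: extreme solutions score 1.0 after normalization, so the
# balanced middle solution is selected as the knee.
front = [(100_000, 0.30), (50_000, 0.32), (10_000, 0.60)]
knee = knee_point(front)
```

On this toy front the extremes (largest model, worst accuracy) each normalize to distance 1.0, so the middle point (50_000, 0.32) wins; this is the sense in which the knee criterion favors a balanced tradeoff over either objective alone.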


Similar Articles

1
A Knee-Guided Evolutionary Algorithm for Compressing Deep Neural Networks.
IEEE Trans Cybern. 2021 Mar;51(3):1626-1638. doi: 10.1109/TCYB.2019.2928174. Epub 2021 Feb 17.
2
Evolutionary Shallowing Deep Neural Networks at Block Levels.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4635-4647. doi: 10.1109/TNNLS.2021.3059529. Epub 2022 Aug 31.
3
Evolutionary Multi-Objective One-Shot Filter Pruning for Designing Lightweight Convolutional Neural Network.
Sensors (Basel). 2021 Sep 2;21(17):5901. doi: 10.3390/s21175901.
4
Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks.
IEEE Trans Cybern. 2020 Aug;50(8):3594-3604. doi: 10.1109/TCYB.2019.2933477. Epub 2019 Aug 27.
5
Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):3930-3942. doi: 10.1109/TNNLS.2021.3106917. Epub 2025 Feb 28.
6
Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation.
IEEE Trans Neural Netw Learn Syst. 2020 Aug;31(8):2916-2929. doi: 10.1109/TNNLS.2019.2933879. Epub 2019 Sep 13.
7
Deep Neural Network Compression by In-Parallel Pruning-Quantization.
IEEE Trans Pattern Anal Mach Intell. 2020 Mar;42(3):568-579. doi: 10.1109/TPAMI.2018.2886192. Epub 2018 Dec 12.
8
Redundant feature pruning for accelerated inference in deep neural networks.
Neural Netw. 2019 Oct;118:148-158. doi: 10.1016/j.neunet.2019.04.021. Epub 2019 May 9.
9
Model pruning based on filter similarity for edge device deployment.
Front Neurorobot. 2023 Mar 2;17:1132679. doi: 10.3389/fnbot.2023.1132679. eCollection 2023.
10
Joint Structure and Parameter Optimization of Multiobjective Sparse Neural Network.
Neural Comput. 2021 Mar 26;33(4):1113-1143. doi: 10.1162/neco_a_01368.

Cited By

1
Efficient compression of encoder-decoder models for semantic segmentation using the separation index.
Sci Rep. 2025 Jul 9;15(1):24639. doi: 10.1038/s41598-025-10348-9.
2
Model pruning based on filter similarity for edge device deployment.
Front Neurorobot. 2023 Mar 2;17:1132679. doi: 10.3389/fnbot.2023.1132679. eCollection 2023.
3
Joint design and compression of convolutional neural networks as a Bi-level optimization problem.
Neural Comput Appl. 2022;34(17):15007-15029. doi: 10.1007/s00521-022-07331-0. Epub 2022 May 17.
4
Fitting thermodynamic-based models: Incorporating parameter sensitivity improves the performance of an evolutionary algorithm.
Math Biosci. 2021 Dec;342:108716. doi: 10.1016/j.mbs.2021.108716. Epub 2021 Oct 21.