

Pruning algorithms-a survey.

Author Information

Reed R

Affiliation

Dept. of Electr. Eng., Washington Univ., Seattle, WA.

Publication Information

IEEE Trans Neural Netw. 1993;4(5):740-7. doi: 10.1109/72.248452.

DOI: 10.1109/72.248452
PMID: 18276504
Abstract

A rule of thumb for obtaining good generalization in systems trained by examples is that one should use the smallest system that will fit the data. Unfortunately, it usually is not obvious what size is best; a system that is too small will not be able to learn the data while one that is just big enough may learn very slowly and be very sensitive to initial conditions and learning parameters. This paper is a survey of neural network pruning algorithms. The approach taken by the methods described here is to train a network that is larger than necessary and then remove the parts that are not needed.
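The train-then-prune approach the abstract describes can be sketched with simple magnitude pruning, one family of methods a survey like this covers: train a network larger than necessary, then zero out the weights judged unnecessary. The toy weight matrix and the 50% pruning fraction below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the weights of an oversized, already-trained layer.
weights = rng.normal(size=(8, 8))

def magnitude_prune(w, fraction):
    """Zero out the `fraction` smallest-magnitude weights.

    The threshold is the corresponding quantile of |w|; weights below
    it are treated as "parts that are not needed" and set to zero.
    """
    threshold = np.quantile(np.abs(w), fraction)
    mask = np.abs(w) >= threshold
    return w * mask, mask

pruned, mask = magnitude_prune(weights, 0.5)
print(f"surviving weights: {mask.sum()} of {mask.size}")
```

In practice a pruning pass like this is interleaved with retraining, and more refined criteria (e.g. sensitivity of the error to each weight) replace the plain |w| threshold.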


Similar Articles

1. Pruning algorithms-a survey.
IEEE Trans Neural Netw. 1993;4(5):740-7. doi: 10.1109/72.248452.
2. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation.
Neural Netw. 2013 Oct;46:210-26. doi: 10.1016/j.neunet.2013.06.004. Epub 2013 Jun 14.
3. Pruning recurrent neural networks for improved generalization performance.
IEEE Trans Neural Netw. 1994;5(5):848-51. doi: 10.1109/72.317740.
4. Pruning artificial neural networks using neural complexity measures.
Int J Neural Syst. 2008 Oct;18(5):389-403. doi: 10.1142/S012906570800166X.
5. An iterative pruning algorithm for feedforward neural networks.
IEEE Trans Neural Netw. 1997;8(3):519-31. doi: 10.1109/72.572092.
6. Radical pruning: a method to construct skeleton radial basis function networks.
Int J Neural Syst. 2000 Apr;10(2):143-54. doi: 10.1142/S0129065700000120.
7. On the Kalman filtering method in neural network training and pruning.
IEEE Trans Neural Netw. 1999;10(1):161-6. doi: 10.1109/72.737502.
8. A Lempel-Ziv complexity-based neural network pruning algorithm.
Int J Neural Syst. 2011 Oct;21(5):427-41. doi: 10.1142/S0129065711002936.
9. Pruning and model-selecting algorithms in the RBF frameworks constructed by support vector learning.
Int J Neural Syst. 2006 Aug;16(4):283-93. doi: 10.1142/S0129065706000688.
10. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation.
IEEE Trans Neural Netw. 2005 Jan;16(1):57-67. doi: 10.1109/TNN.2004.836241.

Cited By

1. Decentralized Distributed Sequential Neural Networks Inference on Low-Power Microcontrollers in Wireless Sensor Networks: A Predictive Maintenance Case Study.
Sensors (Basel). 2025 Jul 24;25(15):4595. doi: 10.3390/s25154595.
2. Efficient compression of encoder-decoder models for semantic segmentation using the separation index.
Sci Rep. 2025 Jul 9;15(1):24639. doi: 10.1038/s41598-025-10348-9.
3. A survey of model compression techniques: past, present, and future.
Front Robot AI. 2025 Mar 20;12:1518965. doi: 10.3389/frobt.2025.1518965. eCollection 2025.
4. Back-propagation-assisted inverse design of structured light fields for given profiles of optical force.
Nanophotonics. 2023 May 1;12(11):2019-2027. doi: 10.1515/nanoph-2023-0101. eCollection 2023 May.
5. LightAWNet: Lightweight adaptive weighting network based on dynamic convolutions for medical image segmentation.
J Appl Clin Med Phys. 2025 Feb;26(2):e14584. doi: 10.1002/acm2.14584. Epub 2024 Dec 1.
6. Dynamic multilayer growth: Parallel vs. sequential approaches.
PLoS One. 2024 May 9;19(5):e0301513. doi: 10.1371/journal.pone.0301513. eCollection 2024.
7. Brain Age Prediction Using Multi-Hop Graph Attention Combined with Convolutional Neural Network.
Bioengineering (Basel). 2024 Mar 8;11(3):265. doi: 10.3390/bioengineering11030265.
8. Compressed models for co-reference resolution: enhancing efficiency with debiased word embeddings.
Sci Rep. 2023 Oct 28;13(1):18510. doi: 10.1038/s41598-023-45677-0.
9. An interpretable deep learning approach for designing nanoporous silicon nitride membranes with tunable mechanical properties.
NPJ Comput Mater. 2023;9(1):82. doi: 10.1038/s41524-023-01037-0. Epub 2023 May 27.
10. A Survey on Low-Latency DNN-Based Speech Enhancement.
Sensors (Basel). 2023 Jan 26;23(3):1380. doi: 10.3390/s23031380.