

LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks.

Affiliations

Università degli Studi di Torino, corso Svizzera 185, Torino, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, France.

Università degli Studi di Torino, corso Svizzera 185, Torino, Italy.

Publication Information

Neural Netw. 2022 Feb;146:230-237. doi: 10.1016/j.neunet.2021.11.029. Epub 2021 Dec 2.

DOI: 10.1016/j.neunet.2021.11.029
PMID: 34906759
Abstract

LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks with a sparse topology. Define the sensitivity of a network parameter as the variation of the loss function with respect to a variation of that parameter. Parameters with low sensitivity, i.e. those having little impact on the loss when perturbed, are shrunk and then pruned to sparsify the network. Our method makes it possible to train a network from scratch, i.e. without preliminary learning or rewinding. Experiments on multiple architectures and datasets show competitive compression ratios with minimal computational overhead.


Similar Articles

1. LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks.
   Neural Netw. 2022 Feb;146:230-237. doi: 10.1016/j.neunet.2021.11.029. Epub 2021 Dec 2.
2. SeReNe: Sensitivity-Based Regularization of Neurons for Structured Sparsity in Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7237-7250. doi: 10.1109/TNNLS.2021.3084527. Epub 2022 Nov 30.
3. On the compression of neural networks using ℓ0-norm regularization and weight pruning.
   Neural Netw. 2024 Mar;171:343-352. doi: 10.1016/j.neunet.2023.12.019. Epub 2023 Dec 13.
4. Feature flow regularization: Improving structured sparsity in deep neural networks.
   Neural Netw. 2023 Apr;161:598-613. doi: 10.1016/j.neunet.2023.02.013. Epub 2023 Feb 13.
5. Weak sub-network pruning for strong and efficient neural networks.
   Neural Netw. 2021 Dec;144:614-626. doi: 10.1016/j.neunet.2021.09.015. Epub 2021 Sep 30.
6. MobilePrune: Neural Network Compression via Sparse Group Lasso on the Mobile System.
   Sensors (Basel). 2022 May 27;22(11):4081. doi: 10.3390/s22114081.
7. Efficient architecture for deep neural networks with heterogeneous sensitivity.
   Neural Netw. 2021 Feb;134:95-106. doi: 10.1016/j.neunet.2020.10.017. Epub 2020 Nov 10.
8. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
   Sensors (Basel). 2021 Sep 25;21(19):6410. doi: 10.3390/s21196410.
9. Transformed ℓ1 regularization for learning sparse deep neural networks.
   Neural Netw. 2019 Nov;119:286-298. doi: 10.1016/j.neunet.2019.08.015. Epub 2019 Aug 27.
10. A Novel Deep-Learning Model Compression Based on Filter-Stripe Group Pruning and Its IoT Application.
    Sensors (Basel). 2022 Jul 27;22(15):5623. doi: 10.3390/s22155623.

Cited By

1. Efficient adaptation of deep neural networks for semantic segmentation in space applications.
   Sci Rep. 2025 May 23;15(1):18046. doi: 10.1038/s41598-025-99192-5.