

ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition.

Affiliation

Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, PR China.

Publication

Neural Netw. 2019 Feb;110:104-115. doi: 10.1016/j.neunet.2018.10.016. Epub 2018 Nov 13.

DOI: 10.1016/j.neunet.2018.10.016
PMID: 30508807
Abstract

Despite the recent success of deep learning models in numerous applications, their widespread use on mobile devices is seriously impeded by storage and computational requirements. In this paper, we propose a novel network compression method called Adaptive Dimension Adjustment Tucker decomposition (ADA-Tucker). With learnable core tensors and transformation matrices, ADA-Tucker performs Tucker decomposition of arbitrary-order tensors. Furthermore, we propose that weight tensors in networks with proper order and balanced dimensions are easier to compress. The high flexibility in decomposition choice therefore distinguishes ADA-Tucker from all previous low-rank models. To compress further, we extend the model to Shared Core ADA-Tucker (SCADA-Tucker) by defining a shared core tensor for all layers. Our methods require no overhead for recording the indices of non-zero elements. Without loss of accuracy, our methods reduce the storage of LeNet-5 and LeNet-300 by ratios of 691× and 233×, respectively, significantly outperforming the state of the art. The effectiveness of our methods is also evaluated on three other benchmarks (CIFAR-10, SVHN, ILSVRC12) and modern deep networks (ResNet, Wide-ResNet).

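To make the compression idea concrete, here is a minimal sketch of how a Tucker decomposition shrinks a convolutional weight tensor. This is not the paper's ADA-Tucker implementation: it uses a plain truncated HOSVD with arbitrarily chosen ranks, whereas the paper learns the core tensor and transformation matrices end to end and adjusts tensor dimensions adaptively.

```python
# Hypothetical illustration: compress a 4-D conv weight tensor with a
# truncated Tucker (HOSVD) decomposition and report the storage ratio.
# Ranks below are arbitrary choices, not values from the paper.
import numpy as np

def tucker_hosvd(weight, ranks):
    """Truncated HOSVD: returns a small core tensor and one factor matrix per mode."""
    factors = []
    for mode, rank in enumerate(ranks):
        # Unfold the tensor along `mode` and keep the top-`rank` left singular vectors.
        unfolded = np.moveaxis(weight, mode, 0).reshape(weight.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u[:, :rank])
    # Contract each factor against the corresponding mode to form the core.
    core = weight
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

weight = np.random.randn(64, 32, 3, 3)   # weights of one conv layer
ranks = (16, 8, 3, 3)
core, factors = tucker_hosvd(weight, ranks)

original = weight.size
compressed = core.size + sum(f.size for f in factors)
print(f"compression ratio: {original / compressed:.1f}x")
```

Storing only the core and the factor matrices replaces the full tensor, and no index bookkeeping is needed, which mirrors the abstract's point that these methods avoid the overhead of recording non-zero element positions.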

Similar Articles

1
ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition.
Neural Netw. 2019 Feb;110:104-115. doi: 10.1016/j.neunet.2018.10.016. Epub 2018 Nov 13.
2
Hybrid tensor decomposition in neural network compression.
Neural Netw. 2020 Dec;132:309-320. doi: 10.1016/j.neunet.2020.09.006. Epub 2020 Sep 19.
3
Tucker network: Expressive power and comparison.
Neural Netw. 2023 Mar;160:63-83. doi: 10.1016/j.neunet.2022.12.016. Epub 2022 Dec 24.
4
Improving efficiency in convolutional neural networks with multilinear filters.
Neural Netw. 2018 Sep;105:328-339. doi: 10.1016/j.neunet.2018.05.017. Epub 2018 Jun 7.
5
Compressing 3DCNNs based on tensor train decomposition.
Neural Netw. 2020 Nov;131:215-230. doi: 10.1016/j.neunet.2020.07.028. Epub 2020 Aug 7.
6
Deep Learning Model Compression With Rank Reduction in Tensor Decomposition.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1315-1328. doi: 10.1109/TNNLS.2023.3330542. Epub 2025 Jan 7.
7
Nonlinear tensor train format for deep neural network compression.
Neural Netw. 2021 Dec;144:320-333. doi: 10.1016/j.neunet.2021.08.028. Epub 2021 Sep 8.
8
MR-NTD: Manifold Regularization Nonnegative Tucker Decomposition for Tensor Data Dimension Reduction and Representation.
IEEE Trans Neural Netw Learn Syst. 2017 Aug;28(8):1787-1800. doi: 10.1109/TNNLS.2016.2545400.
9
Learning a Single Tucker Decomposition Network for Lossy Image Compression with Multiple Bits-Per-Pixel Rates.
IEEE Trans Image Process. 2020 Jan 9. doi: 10.1109/TIP.2020.2963956.
10
Redundant feature pruning for accelerated inference in deep neural networks.
Neural Netw. 2019 Oct;118:148-158. doi: 10.1016/j.neunet.2019.04.021. Epub 2019 May 9.

Cited By

1
Application of entire dental panorama image data in artificial intelligence model for age estimation.
BMC Oral Health. 2023 Dec 15;23(1):1007. doi: 10.1186/s12903-023-03745-x.
2
HMC: Hybrid model compression method based on layer sensitivity grouping.
PLoS One. 2023 Oct 9;18(10):e0292517. doi: 10.1371/journal.pone.0292517. eCollection 2023.
3
The efficacy of supervised learning and semi-supervised learning in diagnosis of impacted third molar on panoramic radiographs through artificial intelligence model.
Dentomaxillofac Radiol. 2023 Sep;52(6):20230030. doi: 10.1259/dmfr.20230030. Epub 2023 May 16.