
Compressing 3DCNNs based on tensor train decomposition.

Affiliations

School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China.

Department of Precision Instrumentation, Center for Brain Inspired Computing Research and Beijing Innovation Center for Future Chip, Tsinghua University, Beijing 100084, China.

Publication Info

Neural Netw. 2020 Nov;131:215-230. doi: 10.1016/j.neunet.2020.07.028. Epub 2020 Aug 7.

DOI: 10.1016/j.neunet.2020.07.028
PMID: 32805632
Abstract

Three-dimensional convolutional neural networks (3DCNNs) have been applied in many tasks, e.g., video and 3D point cloud recognition. However, due to the higher dimension of their convolutional kernels, the space complexity of 3DCNNs is generally larger than that of traditional two-dimensional convolutional neural networks (2DCNNs). To miniaturize 3DCNNs for deployment in constrained environments such as embedded devices, neural network compression is a promising approach. In this work, we adopt tensor train (TT) decomposition, a straightforward and simple in situ training compression method, to shrink 3DCNN models. By tensorizing 3D convolutional kernels in TT format, we investigate how to select appropriate TT ranks to achieve higher compression ratios. We also discuss the redundancy of 3D convolutional kernels with respect to compression, the core significance and future directions of this work, and the theoretical computational complexity versus practical execution time of convolution in TT format. Based on multiple comparative experiments on the VIVA challenge, UCF11, UCF101, and ModelNet40 datasets, we conclude that TT decomposition can compress 3DCNNs by around one hundred times without significant accuracy loss, enabling applications in a wide range of real-world scenarios.
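As a rough illustration of the tensorize-then-decompose idea in the abstract, the sketch below implements the standard TT-SVD algorithm in NumPy. The kernel shape, tensorization, and rank cap are illustrative assumptions, not the paper's configuration (the paper trains networks in TT format in situ rather than decomposing a pretrained kernel).

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """TT-SVD: split an n-way tensor into cores G_k of shape (r_{k-1}, n_k, r_k)
    via sequential truncated SVDs, capping every TT rank at max_rank."""
    dims = tensor.shape
    cores, r_prev, c = [], 1, tensor
    for k in range(len(dims) - 1):
        c = c.reshape(r_prev * dims[k], -1)       # unfold: (r_{k-1}*n_k) x rest
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = min(max_rank, s.size)                 # truncate to the rank cap
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        c = np.diag(s[:r]) @ vt[:r]               # carry the remainder forward
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
    return full.reshape([core.shape[1] for core in cores])

# Hypothetical 3x3x3 kernel with 8 input / 8 output channels, treated as a
# 5-way tensor (the paper's tensorization scheme may differ).
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3, 3, 8, 8))
cores = tt_decompose(kernel, max_rank=4)
ratio = kernel.size / sum(c.size for c in cores)  # storage compression ratio
```

With no truncation (a large `max_rank`), TT-SVD is exact; the compression the abstract reports comes entirely from capping the TT ranks, which is why rank selection is the central tuning question the paper investigates.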

Similar Articles

1. Compressing 3DCNNs based on tensor train decomposition. Neural Netw. 2020 Nov;131:215-230. doi: 10.1016/j.neunet.2020.07.028. Epub 2020 Aug 7.
2. QTTNet: Quantized tensor train neural networks for 3D object and video recognition. Neural Netw. 2021 Sep;141:420-432. doi: 10.1016/j.neunet.2021.05.034. Epub 2021 Jun 5.
3. Nonlinear tensor train format for deep neural network compression. Neural Netw. 2021 Dec;144:320-333. doi: 10.1016/j.neunet.2021.08.028. Epub 2021 Sep 8.
4. Hybrid tensor decomposition in neural network compression. Neural Netw. 2020 Dec;132:309-320. doi: 10.1016/j.neunet.2020.09.006. Epub 2020 Sep 19.
5. Kronecker CP Decomposition With Fast Multiplication for Compressing RNNs. IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2205-2219. doi: 10.1109/TNNLS.2021.3105961. Epub 2023 May 2.
6. ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition. Neural Netw. 2019 Feb;110:104-115. doi: 10.1016/j.neunet.2018.10.016. Epub 2018 Nov 13.
7. Improving efficiency in convolutional neural networks with multilinear filters. Neural Netw. 2018 Sep;105:328-339. doi: 10.1016/j.neunet.2018.05.017. Epub 2018 Jun 7.
8. Compression of Deep Neural Networks based on quantized tensor decomposition to implement on reconfigurable hardware platforms. Neural Netw. 2022 Jun;150:350-363. doi: 10.1016/j.neunet.2022.02.024. Epub 2022 Mar 8.
9. A New Deep-Learning Method for Human Activity Recognition. Sensors (Basel). 2023 Mar 4;23(5):2816. doi: 10.3390/s23052816.
10. Block-term tensor neural networks. Neural Netw. 2020 Oct;130:11-21. doi: 10.1016/j.neunet.2020.05.034. Epub 2020 Jun 7.

Cited By

1. A Comprehensive Review of Hardware Acceleration Techniques and Convolutional Neural Networks for EEG Signals. Sensors (Basel). 2024 Sep 7;24(17):5813. doi: 10.3390/s24175813.
2. Enhancing Human Activity Recognition through Integrated Multimodal Analysis: A Focus on RGB Imaging, Skeletal Tracking, and Pose Estimation. Sensors (Basel). 2024 Jul 17;24(14):4646. doi: 10.3390/s24144646.
3. Compact Neural Architecture Designs by Tensor Representations. Front Artif Intell. 2022 Mar 8;5:728761. doi: 10.3389/frai.2022.728761. eCollection 2022.