
Similar Articles

1. Learning Compressed Transforms with Low Displacement Rank.
Adv Neural Inf Process Syst. 2018 Dec;2018:9052-9060.
2. Theory of deep convolutional neural networks: Downsampling.
Neural Netw. 2020 Apr;124:319-327. doi: 10.1016/j.neunet.2020.01.018. Epub 2020 Jan 25.
3. Holistic CNN Compression via Low-Rank Decomposition with Knowledge Transfer.
IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):2889-2905. doi: 10.1109/TPAMI.2018.2873305. Epub 2018 Oct 1.
4. ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions.
IEEE Trans Pattern Anal Mach Intell. 2021 Aug;43(8):2570-2581. doi: 10.1109/TPAMI.2020.2975796. Epub 2021 Jul 1.
5. Low-Rank Deep Convolutional Neural Network for Multitask Learning.
Comput Intell Neurosci. 2019 May 20;2019:7410701. doi: 10.1155/2019/7410701. eCollection 2019.
6. Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations.
Proc Mach Learn Res. 2019 Jun;97:1517-1527.
7. Compact Neural Architecture Designs by Tensor Representations.
Front Artif Intell. 2022 Mar 8;5:728761. doi: 10.3389/frai.2022.728761. eCollection 2022.
8. The Vapnik-Chervonenkis dimension of graph and recursive neural networks.
Neural Netw. 2018 Dec;108:248-259. doi: 10.1016/j.neunet.2018.08.010. Epub 2018 Sep 1.
9. Neural Network Layer Algebra: A Framework to Measure Capacity and Compression in Deep Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10380-10393. doi: 10.1109/TNNLS.2023.3241100. Epub 2024 Aug 5.
10. Image Classification Based on Light Convolutional Neural Network Using Pulse Couple Neural Network.
Comput Intell Neurosci. 2023 Mar 14;2023:7371907. doi: 10.1155/2023/7371907. eCollection 2023.

Cited By

1. Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations.
Proc Mach Learn Res. 2019 Jun;97:1517-1527.

References

1. Understanding Image Representations by Measuring Their Equivariance and Equivalence.
Int J Comput Vis. 2019;127(5):456-476. doi: 10.1007/s11263-018-1098-y. Epub 2018 May 18.
2. A two-pronged progress in structured dense matrix-vector multiplication.
Proc Annu ACM SIAM Symp Discret Algorithms. 2018 Jan;2018:1060-1079.
3. Learning, invariance, and generalization in high-order neural networks.
Appl Opt. 1987 Dec 1;26(23):4972-8. doi: 10.1364/AO.26.004972.
4. Almost linear VC-dimension bounds for piecewise polynomial networks.
Neural Comput. 1998 Nov 15;10(8):2159-73. doi: 10.1162/089976698300017016.


Learning Compressed Transforms with Low Displacement Rank.

Author Information

Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré

Affiliations

Department of Computer Science, Stanford University.

Department of Computer Science and Engineering, University at Buffalo, SUNY.

Publication Information

Adv Neural Inf Process Syst. 2018 Dec;2018:9052-9060.

PMID: 31130799
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6534145/
Abstract

The low displacement rank (LDR) framework for structured matrices represents a matrix through two displacement operators and a low-rank residual. Existing use of LDR matrices in deep learning has applied fixed displacement operators encoding forms of shift invariance akin to convolutions. We introduce a rich class of LDR matrices with more general displacement operators, and explicitly learn over both the operators and the low-rank component. This class generalizes several previous constructions while preserving compression and efficient computation. We prove bounds on the VC dimension of multi-layer neural networks with structured weight matrices and show empirically that our compact parameterization can reduce the sample complexity of learning. When replacing weight layers in fully-connected, convolutional, and recurrent neural networks for image classification and language modeling tasks, our new classes exceed the accuracy of existing compression approaches, and on some tasks even outperform general unstructured layers while using more than 20X fewer parameters.
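To make the abstract's central object concrete: in the LDR framework a matrix M is represented through a Sylvester displacement D = AM - MB, where A and B are the displacement operators and the residual D = G H^T has low rank r. The sketch below is my own illustration of the classical fixed-operator case (not the authors' code): it builds a random Toeplitz matrix and checks that, although the matrix itself is full-rank, its displacement under the corner-shift operators Z_1 and Z_{-1} has rank at most 2.

```python
import numpy as np

def shift_matrix(n, corner=0.0):
    """Corner-shift operator Z_f: ones on the subdiagonal, f in the
    top-right entry. These are the classic fixed displacement operators
    encoding shift invariance -- the case this paper generalizes."""
    Z = np.diag(np.ones(n - 1), k=-1)
    Z[0, -1] = corner
    return Z

n = 8
rng = np.random.default_rng(0)

# Random n x n Toeplitz matrix: T[i, j] depends only on i - j.
col = rng.standard_normal(n)   # first column: T[i, 0] = col[i]
row = rng.standard_normal(n)   # first row:    T[0, j] = row[j]
T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        T[i, j] = col[i - j] if i >= j else row[j - i]

# Sylvester displacement with the fixed operators A = Z_1, B = Z_{-1}.
A = shift_matrix(n, corner=1.0)
B = shift_matrix(n, corner=-1.0)
D = A @ T - T @ B

print("rank(T) =", np.linalg.matrix_rank(T))  # full rank in general
print("rank(D) =", np.linalg.matrix_rank(D))  # at most 2
```

Storing the rank-r factors G, H (each n x r) costs O(nr) parameters instead of n^2, which is the source of the compression the abstract reports; the paper's contribution is to learn the operators A and B jointly with the low-rank factors rather than fixing them to shift matrices as above.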
