Suppr 超能文献



No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors.

Author Information

Cocola Jorio, Hand Paul, Voroninski Vladislav

Affiliations

Department of Mathematics, Northeastern University, Boston, MA 02115, USA.

Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA.

Publication Information

Entropy (Basel). 2021 Jan 16;23(1):115. doi: 10.3390/e23010115.

DOI: 10.3390/e23010115
PMID: 33467175
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7830301/
Abstract

We provide a non-asymptotic analysis of the spiked Wishart and Wigner matrix models with a generative neural network prior. Spiked random matrices have the form of a rank-one signal plus noise and have been used as models for high dimensional Principal Component Analysis (PCA), community detection and synchronization over groups. Depending on the prior imposed on the spike, these models can display a statistical-computational gap between the information theoretically optimal reconstruction error that can be achieved with unbounded computational resources and the sub-optimal performances of currently known polynomial time algorithms. These gaps are believed to be fundamental, as in the emblematic case of Sparse PCA. In stark contrast to such cases, we show that there is no statistical-computational gap under a generative network prior, in which the spike lies on the range of a generative neural network. Specifically, we analyze a gradient descent method for minimizing a nonlinear least squares objective over the range of an expansive-Gaussian neural network and show that it can recover in polynomial time an estimate of the underlying spike with a rate-optimal sample complexity and dependence on the noise level.
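The recovery procedure described above can be sketched numerically. The following is a minimal illustration, not the paper's exact construction: it assumes a one-layer expansive ReLU network with Gaussian weights, a spiked Wigner observation whose spike lies in the network's range, and plain gradient descent on the nonlinear least-squares objective. Network size, noise level, step size, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

k, n = 4, 64                                   # latent dim k << ambient dim n (expansive)
W = rng.standard_normal((n, k)) / np.sqrt(k)   # Gaussian weights

def G(z):
    """One-layer expansive-Gaussian ReLU network: G(z) = relu(W z)."""
    return np.maximum(W @ z, 0.0)

# Spiked Wigner observation: Y = x* x*^T + noise, with the spike x* in the range of G.
z_star = rng.standard_normal(k)
x_star = G(z_star)
H = rng.standard_normal((n, n))
Y = np.outer(x_star, x_star) + 0.1 * (H + H.T) / np.sqrt(2 * n)

def loss(z):
    """Nonlinear least-squares objective over the range of G."""
    x = G(z)
    return 0.25 * np.linalg.norm(np.outer(x, x) - Y, "fro") ** 2

def grad(z):
    x = G(z)
    gx = (np.outer(x, x) - Y) @ x              # dL/dx for the symmetric residual
    mask = (W @ z > 0).astype(float)           # ReLU derivative
    return W.T @ (mask * gx)                   # chain rule back to the latent z

z = 0.1 * rng.standard_normal(k)               # small random initialization
loss0 = loss(z)
for _ in range(5000):
    z -= 5e-4 * grad(z)

rel_err = np.linalg.norm(G(z) - x_star) / np.linalg.norm(x_star)
print(f"loss {loss0:.2f} -> {loss(z):.4f}, relative error {rel_err:.3f}")
```

Note that both G(z) and the true spike are entrywise nonnegative here, so the iterate always has nonnegative correlation with the spike; the paper's full analysis handles the general landscape, including the negative-multiple descent direction, which this toy run does not exercise.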


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/92fb/7830301/5df7838b49ba/entropy-23-00115-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/92fb/7830301/f01cc5c068e1/entropy-23-00115-g002.jpg

Similar Articles

1
No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors.
Entropy (Basel). 2021 Jan 16;23(1):115. doi: 10.3390/e23010115.
2
Asymptotic Theory of Eigenvectors for Random Matrices with Diverging Spikes.
J Am Stat Assoc. 2022;117(538):996-1009. doi: 10.1080/01621459.2020.1840990. Epub 2020 Dec 8.
3
Singular vectors of sums of rectangular random matrices and optimal estimation of high-rank signals: The extensive spike model.
Phys Rev E. 2023 Nov;108(5-1):054129. doi: 10.1103/PhysRevE.108.054129.
4
Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices.
Probab Theory Relat Fields. 2015 Apr 1;161(3-4):781-815. doi: 10.1007/s00440-014-0562-z.
5
Fundamental limits in structured principal component analysis and how to reach them.
Proc Natl Acad Sci U S A. 2023 Jul 25;120(30):e2302028120. doi: 10.1073/pnas.2302028120. Epub 2023 Jul 18.
6
Layer adaptive node selection in Bayesian neural networks: Statistical guarantees and implementation details.
Neural Netw. 2023 Oct;167:309-330. doi: 10.1016/j.neunet.2023.08.029. Epub 2023 Aug 22.
7
Back-Propagation Learning in Deep Spike-By-Spike Networks.
Front Comput Neurosci. 2019 Aug 13;13:55. doi: 10.3389/fncom.2019.00055. eCollection 2019.
8
Low-rank tensor assisted K-space generative model for parallel imaging reconstruction.
Magn Reson Imaging. 2023 Nov;103:198-207. doi: 10.1016/j.mri.2023.07.004. Epub 2023 Jul 22.
9
Homotopic Gradients of Generative Density Priors for MR Image Reconstruction.
IEEE Trans Med Imaging. 2021 Dec;40(12):3265-3278. doi: 10.1109/TMI.2021.3081677. Epub 2021 Nov 30.
10
Overlaps between eigenvectors of spiked, correlated random matrices: From matrix principal component analysis to random Gaussian landscapes.
Phys Rev E. 2023 Aug;108(2-1):024145. doi: 10.1103/PhysRevE.108.024145.

References Cited in This Article

1
Non-Unique Games over Compact Groups and Orientation Estimation in Cryo-EM.
Inverse Probl. 2020 Jun;36(6). doi: 10.1088/1361-6420/ab7d2c. Epub 2020 Apr 29.
2
DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction.
IEEE Trans Med Imaging. 2018 Jun;37(6):1310-1321. doi: 10.1109/TMI.2017.2785879.
3
SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.
Neuroinformatics. 2018 Oct;16(3-4):383-392. doi: 10.1007/s12021-018-9377-x.
4
Phase transitions in semidefinite relaxations.
Proc Natl Acad Sci U S A. 2016 Apr 19;113(16):E2218-23. doi: 10.1073/pnas.1523097113. Epub 2016 Mar 21.
5
Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications.
Phys Rev E Stat Nonlin Soft Matter Phys. 2011 Dec;84(6 Pt 2):066106. doi: 10.1103/PhysRevE.84.066106. Epub 2011 Dec 12.
6
On Consistency and Sparsity for Principal Components Analysis in High Dimensions.
J Am Stat Assoc. 2009 Jun 1;104(486):682-693. doi: 10.1198/jasa.2009.0121.