
Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors

Authors

Pham Gia-Thuy, Boyer Rémy, Nielsen Frank

Affiliations

Laboratory of Signals and Systems (L2S), Department of Signals and Statistics, University of Paris-Sud, 91400 Orsay, France.

Computer Science Department LIX, École Polytechnique, 91120 Palaiseau, France.

Publication

Entropy (Basel). 2018 Mar 17;20(3):203. doi: 10.3390/e20030203.

DOI: 10.3390/e20030203
PMID: 33265294
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7512719/
Abstract

Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental problem, usually difficult and under-studied. In this work, we consider two Signal-to-Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size N_q × R, for 1 ≤ q ≤ Q, where R, N_q → ∞ with R^{1/q}/N_q converging towards a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear (M_1, …, M_Q)-rank with large factors of size N_q × M_q, for 1 ≤ q ≤ Q, where N_q, M_q → ∞ with M_q/N_q converging towards a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study since the exact derivation of the minimal Bayes' error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNRs and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached for the value of s minimizing the error exponent, denoted by s⋆. In general, due to the asymmetry of the s-divergence, the Bhattacharyya Upper Bound (BUB), that is, the Chernoff information calculated at s⋆ = 1/2, cannot solve this problem effectively. As a consequence, we rely on a costly numerical optimization strategy to find s⋆. However, thanks to powerful random matrix theory tools, a simple analytical expression of s⋆ is provided with respect to the SNR in the two schemes considered. This work shows that the BUB is the tightest bound at low SNRs; however, for higher SNRs, this property no longer holds.
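The bound described in the abstract can be illustrated numerically. The sketch below is not from the paper: the covariance model (noise-only vs. a rank-one signal bump), the dimension N, and the SNR value are illustrative assumptions. It computes the Chernoff s-divergence between two zero-mean Gaussians, finds the exponent s⋆ by the kind of scalar numerical search the abstract calls costly, and compares the resulting CUB with the Bhattacharyya bound at s = 1/2.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_divergence(s, S0, S1):
    """-log integral of p0(x)^(1-s) * p1(x)^s for zero-mean Gaussians
    N(0, S0) and N(0, S1); the Chernoff s-divergence in closed form."""
    M = s * S0 + (1.0 - s) * S1
    _, logdet_M = np.linalg.slogdet(M)
    _, logdet_0 = np.linalg.slogdet(S0)
    _, logdet_1 = np.linalg.slogdet(S1)
    return 0.5 * (logdet_M - s * logdet_0 - (1.0 - s) * logdet_1)

# Hypothetical setup: H0 is noise only (identity covariance); H1 adds a
# rank-one signal component scaled by an assumed SNR.
N, snr = 8, 2.0
u = np.ones((N, 1)) / np.sqrt(N)
S0 = np.eye(N)
S1 = np.eye(N) + snr * (u @ u.T)

# Numerically search for s* maximizing the divergence, i.e., minimizing
# the exponential upper bound on the Bayes error probability.
res = minimize_scalar(lambda s: -chernoff_divergence(s, S0, S1),
                      bounds=(1e-6, 1.0 - 1e-6), method="bounded")
s_star = res.x

# Chernoff Upper Bound at s* vs. Bhattacharyya Upper Bound at s = 1/2
# (equal priors, single observation).
cub = 0.5 * np.exp(-chernoff_divergence(s_star, S0, S1))
bub = 0.5 * np.exp(-chernoff_divergence(0.5, S0, S1))
```

Since s⋆ maximizes the divergence, the CUB is never looser than the BUB; the gap between them is what the paper's closed-form expression for s⋆ (obtained via random matrix theory) lets one capture without this numerical search.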


Figures (PMC7512719, g001–g008):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/ec6f205bc7f6/entropy-20-00203-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/39de76729646/entropy-20-00203-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/0be38218fa04/entropy-20-00203-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/83f2054bf3fe/entropy-20-00203-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/af32dad513e6/entropy-20-00203-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/d126ce7f4492/entropy-20-00203-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/fb90fffb23e6/entropy-20-00203-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d51e/7512719/de7abd40791d/entropy-20-00203-g008.jpg

Similar articles

1. Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors.
Entropy (Basel). 2018 Mar 17;20(3):203. doi: 10.3390/e20030203.
2. Tensor Networks for Latent Variable Analysis: Higher Order Canonical Polyadic Decomposition.
IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):2174-2188. doi: 10.1109/TNNLS.2019.2929063. Epub 2019 Aug 26.
3. Rank-Adaptive Tensor Completion Based on Tucker Decomposition.
Entropy (Basel). 2023 Jan 24;25(2):225. doi: 10.3390/e25020225.
4. Noisy Tensor Completion via Low-Rank Tensor Ring.
IEEE Trans Neural Netw Learn Syst. 2022 Jun 17;PP. doi: 10.1109/TNNLS.2022.3181378.
5. Revisiting Chernoff Information with Likelihood Ratio Exponential Families.
Entropy (Basel). 2022 Oct 1;24(10):1400. doi: 10.3390/e24101400.
6. Accelerated canonical polyadic decomposition using mode reduction.
IEEE Trans Neural Netw Learn Syst. 2013 Dec;24(12):2051-62. doi: 10.1109/TNNLS.2013.2271507.
7. Learning from Binary Multiway Data: Probabilistic Tensor Decomposition and its Statistical Optimality.
J Mach Learn Res. 2020 Jul;21.
8. A Low-Rank Tensor Decomposition Model With Factors Prior and Total Variation for Impulsive Noise Removal.
IEEE Trans Image Process. 2022;31:4776-4789. doi: 10.1109/TIP.2022.3169694. Epub 2022 Jul 15.
9. A ribbon graph derivation of the algebra of functional renormalization for random multi-matrices with multi-trace interactions.
Lett Math Phys. 2022;112(3):58. doi: 10.1007/s11005-022-01546-x. Epub 2022 Jun 11.
10. Tensor decomposition of EEG signals: a brief review.
J Neurosci Methods. 2015 Jun 15;248:59-69. doi: 10.1016/j.jneumeth.2015.03.018. Epub 2015 Apr 1.

Cited by

1. An Elementary Introduction to Information Geometry.
Entropy (Basel). 2020 Sep 29;22(10):1100. doi: 10.3390/e22101100.

References

1. Revisiting Chernoff Information with Likelihood Ratio Exponential Families.
Entropy (Basel). 2022 Oct 1;24(10):1400. doi: 10.3390/e24101400.
2. Some mathematical notes on three-mode factor analysis.
Psychometrika. 1966 Sep;31(3):279-311. doi: 10.1007/BF02289464.