Ju Fujiao, Sun Yanfeng, Gao Junbin, Hu Yongli, Yin Baocai
IEEE Trans Neural Netw Learn Syst. 2018 Oct;29(10):4579-4592. doi: 10.1109/TNNLS.2017.2739131. Epub 2017 Nov 21.
Dimension reduction for high-order tensors is a challenging problem. Conventional approaches reduce the dimension of a higher-order tensor via Tucker decomposition, yielding a lower-dimensional tensor. This paper introduces a probabilistic vectorial dimension reduction model for tensorial data. The model represents a tensor as a linear combination of basis tensors of the same order, and thus offers a learning approach that reduces a tensor directly to a vector. Under this representation, the projection bases of the model are constrained by the tensor CANDECOMP/PARAFAC (CP) decomposition, so the number of free parameters grows only linearly with the number of modes rather than exponentially. Bayesian inference is carried out via a variational Expectation Maximization (EM) approach, and an empirical criterion is given for setting the model parameters (the CP factor number and the number of extracted features). The model outperforms several existing principal component analysis-based methods and CP decomposition on several publicly available databases in terms of classification and clustering accuracy.
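The parameter-saving idea in the abstract can be illustrated with a minimal NumPy sketch, under assumptions of my own: each basis tensor is a rank-R CP tensor (a sum of R outer products of factor-matrix columns), and a data tensor is reduced to a K-vector by least-squares projection onto K such bases. Variable names (`factors`, `bases`, `coef`) and the least-squares projection are illustrative, not the paper's exact probabilistic model or its variational EM inference.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = (8, 9, 10)   # order-3 tensor of size I1 x I2 x I3
R = 5               # CP rank of each basis tensor
K = 4               # number of basis tensors = length of the reduced vector

# CP-structured parameters: one factor matrix per mode, per basis tensor.
# Parameter count is K * R * sum(dims) -- linear in the modes -- instead of
# K * prod(dims) for unstructured full basis tensors.
factors = [[rng.standard_normal((d, R)) for d in dims] for _ in range(K)]

def cp_tensor(facs):
    """Assemble a full tensor from its CP factors as a sum of R outer products."""
    A, B, C = facs
    return np.einsum('ir,jr,kr->ijk', A, B, C)

bases = np.stack([cp_tensor(f) for f in factors])  # shape (K, I1, I2, I3)

# Reduce a data tensor X to a K-vector: find the coefficients of the best
# linear combination of the K basis tensors (least-squares stand-in for the
# model's probabilistic projection).
X = rng.standard_normal(dims)
G = bases.reshape(K, -1)                          # each basis flattened to a row
coef, *_ = np.linalg.lstsq(G.T, X.ravel(), rcond=None)

# Free-parameter comparison: CP-structured vs. unstructured bases.
n_cp = K * R * sum(dims)           # 4 * 5 * 27  = 540
n_full = K * int(np.prod(dims))    # 4 * 720     = 2880
print(coef.shape, n_cp, n_full)
```

With these toy sizes the CP-structured bases need 540 parameters versus 2880 for full basis tensors, and the gap widens exponentially as the order of the tensor grows, which is the scaling argument the abstract makes.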