IEEE Trans Cybern. 2019 Apr;49(4):1417-1426. doi: 10.1109/TCYB.2018.2802934. Epub 2018 Feb 19.
Tucker tensor decomposition (TD) is widely used for image representation, reconstruction, and learning tasks. Compared with principal component analysis (PCA) models, tensor models retain more of the 2-D characteristics of images, whereas PCA models linearize images. However, traditional TD involves attribute information only and thus does not consider the pairwise similarity information between images. In this paper, we propose a graph-Laplacian Tucker tensor decomposition (GLTD) that explores both attribute and pairwise similarity information simultaneously. GLTD has three main benefits: 1) GLTD reconstruction shows clear robustness against image occlusions/outliers; via an out-of-sample GLTD model, we provide an analysis showing that the Laplacian regularization is mainly responsible for this robustness, and to the best of our knowledge, this Laplacian-regularization-induced robustness of TD has not been studied or emphasized before; 2) the GLTD representation exhibits more regularity, which improves both unsupervised and supervised learning results; and 3) an effective algorithm is derived to solve the GLTD problem. Although GLTD is a nonconvex problem, the proposed algorithm is shown experimentally to provide a stable/unique solution starting from different random initializations. Experimental results on image reconstruction, data clustering, and classification tasks demonstrate the benefits of GLTD.
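The abstract does not state the exact formulation. A plausible objective consistent with it, under the assumption that the n images are stacked along the third mode of a tensor \mathcal{X} and that the Laplacian penalty acts on the sample-mode factor, is

    \min_{\mathcal{G}, U_1, U_2, U_3} \; \|\mathcal{X} - \mathcal{G} \times_1 U_1 \times_2 U_2 \times_3 U_3\|_F^2 + \lambda\, \mathrm{Tr}(U_3^\top L U_3), \qquad U_m^\top U_m = I,

where L = D - W is the graph Laplacian of a pairwise similarity matrix W between images and \lambda \geq 0 weights the similarity term. The sketch below implements HOOI-style alternating updates for this assumed objective; with orthonormal factors, the mode-3 update reduces to taking the top eigenvectors of M M^\top - \lambda L, where M is the mode-3 unfolding of the partially projected tensor. The function name, the choice of mode 3 as the sample mode, and the parameter lam are illustrative assumptions, not the paper's published algorithm.

    import numpy as np
    import tensorly as tl
    from tensorly.tenalg import multi_mode_dot

    def gltd_sketch(X, ranks, W, lam=0.1, n_iter=20, seed=0):
        """Alternating (HOOI-style) updates for a Tucker model with a
        graph-Laplacian penalty on the sample mode (mode 2 here).

        X     : (d1, d2, n) array holding n images of size d1 x d2
        ranks : (r1, r2, r3) Tucker ranks
        W     : (n, n) pairwise similarity matrix between images
        lam   : Laplacian weight (hypothetical parameter, not from the paper)
        """
        rng = np.random.default_rng(seed)
        L = np.diag(W.sum(axis=1)) - W          # graph Laplacian L = D - W
        # random orthonormal initialization of the three factor matrices
        U = [np.linalg.qr(rng.standard_normal((X.shape[m], ranks[m])))[0]
             for m in range(3)]
        for _ in range(n_iter):
            for m in range(3):
                # project X onto the other two factors, then unfold along mode m
                others = [k for k in range(3) if k != m]
                Y = multi_mode_dot(X, [U[k].T for k in others], modes=others)
                M = tl.unfold(Y, m)
                S = M @ M.T
                if m == 2:                      # sample mode: Laplacian penalty
                    S = S - lam * L
                # top-r eigenvectors of the symmetric matrix S (eigh: ascending)
                _, vecs = np.linalg.eigh(S)
                U[m] = vecs[:, -ranks[m]:]
        core = multi_mode_dot(X, [u.T for u in U], modes=[0, 1, 2])
        return core, U

For example, gltd_sketch(X, (10, 10, 20), W) on a (32, 32, 100) image stack returns a (10, 10, 20) core and three factor matrices; with lam = 0 the update is plain HOOI, so the Laplacian term is the only ingredient that couples samples with similar neighbors, which is consistent with the abstract's claim that this regularization drives the robustness.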