IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1998-2011. doi: 10.1109/TNNLS.2017.2690379. Epub 2017 Apr 17.
Growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NNs) crucial topics. Conventionally, an NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize to learning representations from multiway observations. Classification performance with a vectorized NN is constrained because the temporal or spatial information in neighboring ways is disregarded, and more parameters are required to learn the complicated data structure. This paper presents a new tensor-factorized NN (TFNN), which tightly integrates TF and NN for multiway feature extraction and classification under a unified discriminative objective. The TFNN can be seen as a generalized NN in which the affine transformation of a standard NN is replaced by a multilinear, multiway factorization. The multiway information is preserved through layerwise factorization, and Tucker decomposition and nonlinear activation are performed in each hidden layer. Tensor-factorized error backpropagation is developed to train the TFNN with limited parameter size and computation time. The TFNN can be further extended to a convolutional TFNN (CTFNN), which applies factorized convolution over small subtensors. Experiments on real-world classification tasks demonstrate that the TFNN and CTFNN attain substantial improvements over an NN and a convolutional NN, respectively.
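As a rough illustration of the layerwise factorization described above, the following sketch is not taken from the paper; the function names, factor shapes, and choice of activation are assumptions made for illustration. It shows the general idea of replacing the vectorized affine map of a standard hidden layer with per-mode weight matrices applied through mode-n products, so the multiway structure of the input is preserved.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (mode-n product)."""
    # Move the target mode to the front, multiply, then move it back.
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    result = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(result.reshape(matrix.shape[0], *shape[1:]), 0, mode)

def tensor_factorized_layer(x, factors, bias):
    """One tensor-factorized hidden layer (illustrative, not the paper's code):
    a multilinear Tucker-style transformation along each way of the input,
    followed by a nonlinear activation. `factors` is a list of per-mode weight
    matrices; `bias` matches the output tensor shape."""
    h = x
    for mode, u in enumerate(factors):
        h = mode_n_product(h, u, mode)
    return np.tanh(h + bias)

# Toy usage: a 3-way observation mapped to a smaller 3-way hidden representation.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))                 # multiway observation
factors = [rng.standard_normal((5, 8)) * 0.1,      # mode-1 weights
           rng.standard_normal((5, 8)) * 0.1,      # mode-2 weights
           rng.standard_normal((3, 4)) * 0.1]      # mode-3 weights
bias = np.zeros((5, 5, 3))
h = tensor_factorized_layer(x, factors, bias)
print(h.shape)  # (5, 5, 3): the hidden representation stays multiway
```

Compared with flattening the 8x8x4 input into a 256-dimensional vector and applying a dense weight matrix, the per-mode factors here use far fewer parameters while keeping the neighboring-way structure intact, which is the motivation the abstract gives for the TFNN.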