Liu Yipeng, Liu Jiani, Zhu Ce
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5402-5411. doi: 10.1109/TNNLS.2020.2967022. Epub 2020 Nov 30.
Tensor-on-tensor regression predicts a tensor from a tensor, generalizing most previous multilinear regression approaches, including methods that predict a scalar from a tensor or a tensor from a scalar. However, in this generalized setting the coefficient array can become very high dimensional, since both the predictors and the responses are high order. Compared with the current method based on a low CANDECOMP/PARAFAC (CP) rank approximation, a low tensor train (TT) rank approximation can further improve the stability and efficiency of estimating a high- or even ultrahigh-dimensional coefficient array. In the proposed low-TT-rank coefficient array estimation for tensor-on-tensor regression, a TT rounding procedure is adopted to obtain adaptive ranks, instead of selecting ranks by experience. In addition, an l-norm constraint is imposed to avoid overfitting. The optimization problem is solved with hierarchical alternating least squares. Numerical experiments on a synthetic data set and two real-life data sets demonstrate that the proposed method outperforms state-of-the-art methods in prediction accuracy with comparable computational complexity, and that it is more computationally efficient when the data are high dimensional with small size in each mode.
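To make the two core ideas of the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: `tt_svd` decomposes a tensor into TT cores by successive truncated SVDs, choosing each rank adaptively from a tolerance (the idea behind TT rounding with a prescribed accuracy), and `tot_predict` illustrates tensor-on-tensor prediction as a contraction of the predictor modes of a coefficient array with the matching modes of the input. The function names and the tolerance-splitting heuristic are illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, tol=1e-6):
    """Decompose `tensor` into TT cores via successive truncated SVDs.

    Ranks are chosen adaptively: at each unfolding, singular values below
    a threshold derived from `tol` and the tensor norm are discarded,
    rather than fixing the ranks by experience. (Illustrative sketch.)
    """
    shape = tensor.shape
    d = len(shape)
    # Spread the overall tolerance across the d-1 unfoldings.
    delta = tol * np.linalg.norm(tensor) / np.sqrt(d - 1) if d > 1 else 0.0
    cores = []
    rank_prev = 1
    mat = tensor.reshape(rank_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        # Adaptive TT rank: keep only singular values above the threshold.
        r = max(1, int(np.sum(s > delta)))
        cores.append(U[:, :r].reshape(rank_prev, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        rank_prev = r
    cores.append(mat.reshape(rank_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract a list of TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
    return full.reshape(full.shape[1:-1])  # drop boundary ranks of 1

def tot_predict(X, B, num_pred_modes):
    """Tensor-on-tensor prediction: contract the predictor modes of the
    coefficient array B with the matching modes of X.
    X: (samples, p1, ..., pL);  B: (p1, ..., pL, q1, ..., qM)."""
    return np.tensordot(X, B, axes=(list(range(1, num_pred_modes + 1)),
                                    list(range(num_pred_modes))))
```

In a TT-based tensor-on-tensor regression, the coefficient array `B` would be stored and updated in its TT cores, so its storage grows linearly rather than exponentially with the number of modes; the sketch above only shows the decomposition and the prediction contraction, not the hierarchical alternating least squares updates.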