Anh-Huy Phan, Andrzej Cichocki, André Uschmajew, Petr Tichavský, George Luta, Danilo P. Mandic
IEEE Trans Neural Netw Learn Syst. 2020 Nov;31(11):4622-4636. doi: 10.1109/TNNLS.2019.2956926. Epub 2020 Oct 30.
Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model, which represents data as an ordered network of subtensors of order 2 or order 3, has so far not been widely considered in these fields, although this so-called tensor network (TN) decomposition has long been studied in quantum physics and scientific computing. In this article, we present novel algorithms and applications of TN decompositions, with a particular focus on the tensor train (TT) decomposition and its variants. The novel algorithms developed for the TT decomposition update, in an alternating way, one or several core tensors at each iteration and exhibit enhanced mathematical tractability and scalability for large-scale data tensors. For rigor, the cases of given ranks, a given approximation error, and a given error bound are all considered. The proposed algorithms provide well-balanced TT decompositions and are tested in the classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, achieving superior performance over the widely used truncated algorithms for TT decomposition.
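As a concrete reference point, the truncated TT decomposition mentioned above is typically computed by the sequential truncated-SVD algorithm (TT-SVD). The following NumPy sketch is illustrative only, not the authors' proposed alternating algorithms; the function names `tt_svd` and `tt_reconstruct` are our own.

```python
import numpy as np

def tt_svd(X, ranks):
    """Truncated TT-SVD sketch: decompose a d-way array X into d order-3
    TT cores G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
    ranks: list of d-1 internal TT ranks used for truncation."""
    dims = X.shape
    d = len(dims)
    cores = []
    r_prev = 1
    C = X.reshape(dims[0], -1)
    for k in range(d - 1):
        # Unfold the remainder so rows combine the previous rank and mode k.
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(ranks[k], len(s))          # truncate to the requested rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]           # carry the residual factor forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor."""
    T = cores[0]                            # shape (1, n_1, r_1)
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape([G.shape[1] for G in cores])
```

When the chosen ranks equal the true TT ranks of the data, the reconstruction is exact; smaller ranks yield the truncated approximation that the proposed alternating algorithms are compared against.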