IEEE Trans Neural Netw Learn Syst. 2016 Sep;27(9):1933-46. doi: 10.1109/TNNLS.2015.2465178. Epub 2015 Aug 20.
This paper addresses robust low-rank tensor recovery. Tensor recovery aims at reconstructing a low-rank tensor from linear measurements, and finds applications in image processing, pattern recognition, multitask learning, and so on. In real-world applications, data may be contaminated by sparse gross errors, to which existing approaches may not be robust. To resolve this problem, this paper proposes approaches based on regularized redescending M-estimators, which have been introduced in robust statistics. The robustness of the proposed approaches is achieved by the regularized redescending M-estimators; however, their nonconvexity also leads to computational difficulty. To handle this problem, we develop algorithms based on proximal and linearized block coordinate descent methods. By explicitly deriving the Lipschitz constant of the gradient of the data-fitting risk, the descent property of the algorithms is established. Moreover, we verify that the objective functions of the proposed approaches satisfy the Kurdyka-Łojasiewicz property, which establishes the global convergence of the algorithms. Numerical experiments on synthetic as well as real data verify that our approaches are robust in the presence of outliers and remain effective in their absence.
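The abstract does not specify which redescending M-estimator the paper uses. As an illustration only, the sketch below uses Tukey's biweight, a classic redescending M-estimator: its influence function (the derivative of the loss) vanishes for residuals beyond a cutoff `c`, so gross outliers contribute zero gradient to the data-fitting risk. The toy location-estimation loop at the end is a hypothetical one-dimensional analogue of the block coordinate descent the paper describes, with step size `1/L` where `L = 1` bounds the Lipschitz constant of the influence function for this loss.

```python
import numpy as np

def tukey_biweight(r, c=4.685):
    """Tukey's biweight loss rho(r): quadratic near 0, constant for |r| > c."""
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= c
    rho = np.full_like(r, c ** 2 / 6.0)          # saturated value beyond the cutoff
    rho[inside] = (c ** 2 / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
    return rho

def tukey_influence(r, c=4.685):
    """Influence function psi(r) = rho'(r); redescends to exactly 0 for |r| > c."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= c, r * (1.0 - (r / c) ** 2) ** 2, 0.0)

# Toy demo (not the paper's tensor algorithm): robust location estimation by
# gradient descent on the mean Tukey loss. Two gross outliers are ignored
# because their influence is zero; the sample mean, by contrast, is pulled away.
rng = np.random.default_rng(0)
data = np.concatenate([1.0 + 0.1 * rng.standard_normal(50), [100.0, -80.0]])
theta = np.median(data)  # robust initialization matters: the loss is nonconvex
for _ in range(200):
    theta -= 1.0 * np.mean(tukey_influence(theta - data))  # step = 1/L, L = 1
```

Because the influence function redescends to zero rather than merely bounding the residual (as Huber's loss does), arbitrarily large outliers have no effect at all on the stationary point — the property the abstract credits for robustness — at the price of the nonconvexity the paper's proximal linearized algorithms are designed to handle.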