IEEE Trans Neural Netw Learn Syst. 2013 Mar;24(3):383-96. doi: 10.1109/TNNLS.2012.2235082.
In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Unlike traditional approaches, which directly utilize convex norms to measure sparseness, our method introduces more reasonable nonconvex measurements to enhance sparsity in both the intrinsic low-rank structure and the sparse corruptions. We introduce, respectively, how to incorporate the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the resulting optimization problem is no longer convex, it can still be solved effectively by a majorization-minimization (MM)-type algorithm, in which the nonconvex objective function is iteratively replaced by a convex surrogate, so that the nonconvex problem ultimately falls into the general framework of reweighted approaches. We prove that the MM-type algorithm converges to a stationary point after successive iterations. The proposed model is applied to two typical problems: robust principal component analysis (RPCA) and low-rank representation (LRR). Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than the convex-norm-based method, both for data of higher rank and for data with denser corruptions.
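To make the MM-type reweighting mechanism concrete, below is a minimal sketch in Python/NumPy of such a scheme for the RPCA setting with log-sum penalties. It is an illustration under assumed choices, not the paper's reference implementation: the function name log_sum_rpca and the parameters lam, mu, eps, and n_outer are hypothetical, and the stopping rule (a fixed iteration budget) is a simplification. Each outer iteration majorizes the log-sum terms at the current iterate, yielding a weighted nuclear norm plus weighted ℓ1 surrogate, and then decreases that surrogate with one pass of weighted singular value thresholding and weighted soft thresholding.

```python
import numpy as np

def log_sum_rpca(D, lam=None, mu=None, eps=1e-2, n_outer=30):
    """Hypothetical MM-type reweighted solver sketch for RPCA with
    log-sum penalties on the singular values of L and entries of S:

        min_{L,S}  sum_i log(sigma_i(L) + eps)
                   + lam * sum_{ij} log(|S_ij| + eps),   D ~ L + S.

    Each outer step majorizes both log-sum terms at the current
    iterate (weighted nuclear norm + weighted l1 surrogate), then
    decreases the surrogate with one alternating pass of weighted
    singular value thresholding (L) and weighted soft thresholding (S).
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # common RPCA scaling (heuristic)
    if mu is None:
        mu = 0.1 * np.linalg.norm(D, 2)  # proximal scale (heuristic)

    S = np.zeros_like(D)
    # Uniform initial weights: the first pass solves the usual convex
    # (nuclear norm + l1) surrogate, as in standard reweighted l1.
    w_sig = np.ones(min(m, n))
    w_S = np.ones_like(D)
    for _ in range(n_outer):
        # Weighted singular value thresholding: prox of the weighted
        # nuclear norm (exact when weights are non-descending, which
        # 1/(sigma + eps) with sigma sorted descending guarantees).
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - mu * w_sig, 0.0)) @ Vt

        # Weighted soft thresholding: prox of the weighted l1 norm.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu * w_S, 0.0)

        # MM reweighting: re-majorize the log-sum terms at the new
        # iterate; small values get large weights and are pushed to 0.
        w_sig = 1.0 / (np.linalg.svd(L, compute_uv=False) + eps)
        w_S = 1.0 / (np.abs(S) + eps)
    return L, S

# Toy usage: a rank-5 matrix with 10% gross corruptions.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
mask = rng.random(L0.shape) < 0.1
S0 = np.where(mask, 10.0 * rng.standard_normal(L0.shape), 0.0)
L_hat, S_hat = log_sum_rpca(L0 + S0)
print("relative error on L:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```

The single alternating pass per outer iteration is a deliberate simplification: an MM scheme only needs to decrease the convex surrogate, not minimize it exactly, so inexact inner solves of this kind are common in reweighted approaches.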