Zhang Hengmin, Gong Chen, Qian Jianjun, Zhang Bob, Xu Chunyan, Yang Jian
IEEE Trans Neural Netw Learn Syst. 2019 Oct;30(10):2916-2925. doi: 10.1109/TNNLS.2019.2900572. Epub 2019 Mar 18.
Recently, the efficient recovery of low-rank matrices has attracted rapidly increasing attention in computer vision and machine learning. The popular convex surrogate for rank minimization is nuclear norm-based minimization (NNM), which usually leads to a biased solution, since NNM tends to over-shrink the rank components and treats each rank component equally. To address this issue, several nonconvex nonsmooth rank (NNR) relaxations have been widely exploited. Different from these convex and nonconvex rank substitutes, this paper first introduces a general and flexible rank relaxation function, named the weighted NNR relaxation function, which is derived from the initial double NNR (DNNR) relaxations, i.e., a DNNR relaxation function acting on a nonconvex singular value function (SVF). To solve the DNNR minimization problem, we devise an iteratively reweighted SVF optimization algorithm with a continuation technique, in which the weighting vector is defined by computing supergradient values; the closed-form solution of each subproblem can be obtained efficiently by a generalized proximal operator, provided that the elements of the desired weighting vector satisfy a nondecreasing order. We then prove that the objective function values decrease monotonically and that any limit point of the generated subsequence is a critical point. Combining the Kurdyka-Łojasiewicz property with some milder assumptions, we further establish a global convergence guarantee. As an application to the matrix completion problem, experimental results on both synthetic and real-world data show that our methods are competitive with several state-of-the-art convex and nonconvex matrix completion methods.
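To make the scheme described in the abstract concrete, the following Python sketch illustrates one common instantiation of iteratively reweighted singular value thresholding for matrix completion. It is not the authors' exact DNNR algorithm: the log surrogate g(σ) = log(σ + ε), the unit step size, and all parameter names (`lam`, `eps`, `n_iter`) are assumptions chosen for illustration. The weights w_i = λ/(σ_i + ε) are supergradients of the assumed surrogate and are automatically nondecreasing when the singular values are sorted in decreasing order, which is the condition under which the weighted thresholding step has a closed-form solution.

```python
import numpy as np

def weighted_svt(Y, w):
    """Weighted singular value thresholding: the proximal operator of the
    weighted nuclear norm sum_i w_i * sigma_i(X) at Y. This shrinkage is the
    closed-form minimizer when the weights w are in nondecreasing order
    (singular values are returned in decreasing order by SVD)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - w, 0.0)  # soft-threshold each singular value by its weight
    return U @ np.diag(s_shrunk) @ Vt

def irw_svf_completion(M, mask, lam=5.0, eps=1e-2, n_iter=100):
    """Iteratively reweighted matrix completion sketch (assumed log surrogate).

    M    : observed matrix (arbitrary values where mask is False)
    mask : boolean array, True at observed entries
    Each iteration recomputes supergradient weights w_i = lam / (sigma_i + eps)
    from the current iterate, takes a unit gradient step on the data-fit term
    0.5 * ||P_Omega(X - M)||_F^2, and applies weighted thresholding."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        sigma = np.linalg.svd(X, compute_uv=False)
        w = lam / (sigma + eps)          # supergradients of log(sigma + eps), scaled by lam
        Y = X + mask * (M - X)           # gradient step: reset observed entries to their data
        X = weighted_svt(Y, w)
    return X

# Toy usage: recover a rank-3 matrix from roughly 60% of its entries.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
mask = rng.random((50, 50)) < 0.6
X_hat = irw_svf_completion(M_true, mask)
print(np.linalg.norm(X_hat - M_true) / np.linalg.norm(M_true))
```

Because large singular values receive small weights and small ones receive large weights, this reweighting shrinks dominant rank components less aggressively than NNM, which is precisely the bias the abstract attributes to treating all rank components equally.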