IEEE Trans Image Process. 2018 Mar;27(3):1336-1346. doi: 10.1109/TIP.2017.2777184. Epub 2017 Nov 23.
Recently, many ℓ1-norm-based PCA approaches have been developed to improve the robustness of PCA. However, most existing approaches solve the optimal projection matrix by maximizing ℓ1-norm-based variance and do not directly minimize the reconstruction error, which is the true goal of PCA. Moreover, they do not have rotational invariance. To handle these problems, we propose a generalized robust metric learning for PCA, namely, ℓ2,p-PCA, which employs the ℓ2,p-norm as the distance metric for the reconstruction error. The proposed method not only is robust to outliers but also retains PCA's desirable properties. For example, the solutions are the principal eigenvectors of a robust covariance matrix, and the low-dimensional representation has rotational invariance. These properties are not shared by ℓ1-norm-based PCA methods. A new iterative algorithm is presented to solve ℓ2,p-PCA efficiently. Experimental results illustrate that the proposed method is more effective and robust than PCA, PCA-L1 greedy, PCA-L1 nongreedy, and HQ-PCA.
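As a rough illustration of the criterion the abstract describes (notation is ours: x_i denotes a centered d-dimensional sample and W a d×k orthonormal projection matrix), an ℓ2,p-norm reconstruction-error objective typically takes the form

```latex
\min_{W^{\top} W = I_k} \; \sum_{i=1}^{n} \bigl\lVert x_i - W W^{\top} x_i \bigr\rVert_2^{\,p}, \qquad 0 < p \le 2,
```

where p = 2 recovers standard PCA and smaller p down-weights samples with large residuals (outliers). The sketch below shows one generic way such an objective can be attacked: an iteratively reweighted scheme in which each pass builds a weighted ("robust") covariance matrix and takes its top-k eigenvectors, consistent with the abstract's statement that the solutions are principal eigenvectors of a robust covariance matrix. The function name, update rule, and parameters here are illustrative assumptions, not necessarily the authors' exact algorithm.

```python
import numpy as np

def l2p_pca(X, k, p=1.0, n_iter=50, eps=1e-8):
    """Iteratively reweighted sketch of an l2,p-norm PCA.

    X : (n_samples, n_features) centered data matrix.
    k : target subspace dimension.
    p : norm parameter, 0 < p <= 2 (p = 2 reduces to standard PCA).

    NOTE: generic reweighted-covariance iteration for illustration only;
    it is not claimed to be the paper's exact update rule.
    """
    n, d = X.shape
    # initialize with ordinary PCA (right singular vectors of X)
    W = np.linalg.svd(X, full_matrices=False)[2][:k].T   # d x k
    for _ in range(n_iter):
        # per-sample reconstruction residuals under the current subspace
        R = X - X @ W @ W.T
        r = np.linalg.norm(R, axis=1)
        # reweighting: large residuals (outliers) receive small weights
        w = np.power(np.maximum(r, eps), p - 2)
        # weighted covariance and its top-k eigenvectors
        C = (X * w[:, None]).T @ X
        vals, vecs = np.linalg.eigh(C)
        W = vecs[:, np.argsort(vals)[::-1][:k]]
    return W
```

With p = 2 all weights are constant and the loop returns the usual PCA basis after one pass; with p = 1 the weights become inverse residual norms, which is what gives the procedure its robustness to outlying samples.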