IEEE Trans Image Process. 2017 Feb;26(2):684-695. doi: 10.1109/TIP.2016.2621667. Epub 2016 Oct 26.
L1-norm-based discriminant subspace learning has recently attracted increasing attention in dimensionality reduction and machine learning. However, most existing approaches solve for the column vectors of the optimal projection matrix one by one using a greedy strategy. Consequently, the resulting projection matrix does not necessarily best optimize the corresponding trace ratio objective function, which is the essential criterion for general supervised dimensionality reduction. In this paper, we propose a non-greedy iterative algorithm to solve the trace ratio form of L1-norm-based linear discriminant analysis, and we analyze its convergence in detail. Extensive experiments on five popular image databases show that the proposed algorithm maximizes the objective function value and is superior to most existing L1-LDA algorithms.
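The abstract does not give the algorithm itself, but the trace-ratio criterion it refers to can be illustrated with the classical (L2-norm) case: maximize tr(W^T Sb W) / tr(W^T Sw W) over the projection matrix W, solved non-greedily by iterating between updating the ratio value and re-solving an eigenproblem. The sketch below is this standard L2 trace-ratio iteration, not the paper's L1-norm algorithm; all function names are illustrative.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def trace_ratio_lda(X, y, dim, iters=100, tol=1e-10):
    """Non-greedy trace-ratio solver (L2 case, for illustration only):
    alternately update lam = tr(W'Sb W)/tr(W'Sw W) and take W as the
    top-dim eigenvectors of Sb - lam*Sw, so all columns of W are
    optimized jointly rather than one by one."""
    Sb, Sw = scatter_matrices(X, y)
    lam = 0.0
    for _ in range(iters):
        vals, vecs = np.linalg.eigh(Sb - lam * Sw)
        W = vecs[:, -dim:]          # eigenvectors of the largest eigenvalues
        new_lam = np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)
        if abs(new_lam - lam) < tol:
            return W, new_lam
        lam = new_lam
    return W, lam
```

In the L1-norm variant studied in the paper, the scatter terms are sums of absolute values rather than squared norms, so the inner step is no longer a plain eigenproblem, but the same joint (non-greedy) update structure applies.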