IEEE Trans Neural Netw Learn Syst. 2014 Apr;25(4):793-805. doi: 10.1109/TNNLS.2013.2281428.
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed criterion is its use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis method (L1-LDA) for linear feature extraction problems. To solve the L1-LDA optimization problem, we develop an efficient iterative algorithm in which a novel surrogate convex function is introduced, so that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we generalize the L1-LDA method to nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed methods in comparison with state-of-the-art methods.
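The abstract does not spell out the surrogate function or the iteration, so the following is only a minimal illustrative sketch of the general idea: replacing each L1 term |z| by the weighted-L2 surrogate z²/|z_t| turns every iteration into a weighted Fisher problem with a closed-form (generalized-eigenvector) step. The function name `l1_lda_direction` and all implementation details are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def l1_lda_direction(X, y, n_iter=50, eps=1e-8, seed=0):
    """Illustrative iteratively reweighted scheme for a one-dimensional
    L1-norm discriminant direction (NOT the paper's exact algorithm).

    Each L1 term |z| is majorized by the surrogate z**2 / |z_t|, so each
    iteration solves a weighted Fisher problem in closed form."""
    classes = np.unique(y)
    m = X.mean(axis=0)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        Sb = np.zeros((X.shape[1], X.shape[1]))
        Sw = np.zeros_like(Sb)
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            d = mc - m
            # Between-class term: the 1/|w^T d| weight pulls the
            # L2 scatter toward its L1 counterpart.
            Sb += len(Xc) * np.outer(d, d) / (abs(w @ d) + eps)
            for x in Xc:
                r = x - mc
                # Within-class term, reweighted the same way.
                Sw += np.outer(r, r) / (abs(w @ r) + eps)
        # Closed-form weighted Fisher step: power-style update toward
        # the top generalized eigenvector of (Sw, Sb).
        w_new = np.linalg.solve(Sw + eps * np.eye(len(w)), Sb @ w)
        w_new /= np.linalg.norm(w_new)
        if abs(w_new @ w) > 1 - 1e-9:  # converged up to sign
            w = w_new
            break
        w = w_new
    return w
```

On well-separated two-class data, the recovered direction aligns with the axis separating the class means, while the per-sample 1/|w^T r| weights down-weight points far from their class mean, which is the source of the robustness to outliers claimed for the L1 criterion.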