IEEE Trans Neural Netw Learn Syst. 2020 Feb;31(2):433-444. doi: 10.1109/TNNLS.2019.2904701. Epub 2019 May 20.
Linear discriminant analysis (LDA) is the most widely used supervised dimensionality reduction approach. By first removing the null space of the total scatter matrix S via principal component analysis (PCA), the LDA algorithm can avoid the small sample size problem. Most existing supervised dimensionality reduction methods therefore extract the principal components of the data first and then conduct LDA on them. However, the directions of largest variance retained by PCA are often, but not always, the most discriminative. Thus, this two-step strategy may fail to capture the most discriminant information for classification tasks. Different from traditional approaches, which conduct PCA and LDA in sequence, we propose a novel method, referred to as joint principal component and discriminant analysis (JPCDA), for dimensionality reduction. With this method, we not only avoid the small sample size problem but also extract discriminant information for classification tasks. An iterative optimization algorithm is proposed to solve the resulting problem. To validate the efficacy of the proposed method, we perform extensive experiments on several benchmark data sets in comparison with state-of-the-art dimensionality reduction methods. The experimental results illustrate that the proposed method achieves quite promising classification performance.
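The conventional two-step strategy the abstract argues against can be sketched as follows. This is an illustrative example using scikit-learn on a standard toy data set, not the paper's implementation or data; the chosen component counts are arbitrary.

```python
# Two-step PCA -> LDA baseline: PCA removes the null space of the total
# scatter matrix (avoiding the small sample size problem), then LDA is
# run in the reduced subspace.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes

# Step 1: keep the top principal components (largest-variance directions).
# Note: these directions need not be the most discriminative ones.
pca = PCA(n_components=3)
X_pca = pca.fit_transform(X)

# Step 2: LDA finds at most (n_classes - 1) discriminant directions
# within the PCA subspace.
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X_pca, y)

print(X_pca.shape, X_lda.shape)
```

JPCDA replaces this sequential pipeline with a joint optimization over both projections, so the retained subspace is chosen with the discriminant criterion in the loop rather than fixed in advance by variance alone.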