IEEE Trans Neural Netw Learn Syst. 2020 Aug;31(8):2752-2763. doi: 10.1109/TNNLS.2019.2906302. Epub 2019 Apr 11.
While being one of the first and most elegant tools for dimensionality reduction, Fisher linear discriminant analysis (FLDA) is not currently considered among the top methods for feature extraction or classification. In this paper, we will review two recent approaches to FLDA, namely, least squares Fisher discriminant analysis (LSFDA) and regularized kernel FDA (RKFDA), and propose deep FDA (DFDA), a straightforward nonlinear extension of LSFDA that takes advantage of recent advances in deep neural networks. We will compare the performance of RKFDA and DFDA on a large number of two-class and multiclass problems, many of them involving class-imbalanced data sets and some having quite large sample sizes; for this, we will use the areas under the receiver operating characteristic (ROC) curves of the classifiers considered. As we shall see, the classification performance of both methods is often very similar and particularly good on imbalanced problems, but building DFDA models is considerably faster than building RKFDA models, particularly on problems with quite large sample sizes.
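To ground the terms the abstract uses, the following is a minimal illustrative sketch (not the paper's implementation): it fits classical Fisher/linear discriminant analysis with scikit-learn on a synthetic class-imbalanced two-class problem and scores it by the area under the ROC curve, the metric the paper uses to compare RKFDA and DFDA. The dataset, imbalance ratio, and all parameter choices here are assumptions for illustration only.

```python
# Illustrative sketch, not the paper's LSFDA/RKFDA/DFDA code.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced two-class problem (roughly 90% / 10% class weights).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Classical Fisher/linear discriminant analysis as the baseline classifier.
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

# AUC is computed from continuous scores rather than hard labels,
# which is why it is a reasonable metric under class imbalance.
scores = lda.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, scores)
print(f"AUC = {auc:.3f}")
```

On imbalanced data, plain accuracy can look high for a classifier that ignores the minority class; the AUC used above avoids that failure mode, which is consistent with the abstract's focus on imbalanced problems.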