Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Beijing 100190, PR China.
Neural Netw. 2012 Oct;34:56-64. doi: 10.1016/j.neunet.2012.06.001. Epub 2012 Jul 10.
Linear Discriminant Analysis (LDA) is an important dimensionality reduction algorithm, but its performance on multi-class data is often limited. This limitation arises because LDA maximizes the average divergence among classes, so similar classes with small divergence tend to be merged in the subspace. To address this problem, we propose a novel dimensionality reduction method called Maxi-Min Discriminant Analysis (MMDA). In contrast to traditional LDA, MMDA seeks a low-dimensional subspace that maximizes the minimal (worst-case) divergence among classes. This worst-case criterion avoids LDA's tendency to merge similar classes with small divergence on multi-class data. We formulate MMDA as a convex problem and, further, as a large-margin learning problem. A key contribution is an efficient online learning algorithm for the resulting problem, which makes the proposed method applicable to large-scale data. Experimental results on various datasets demonstrate the efficiency and efficacy of the proposed method against five competitive approaches, as well as its scalability to data with thousands of classes.
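Since this record contains only the abstract, the following is a minimal illustrative sketch rather than the authors' algorithm: for a fixed projection, it contrasts an LDA-style score (average pairwise divergence among classes) with an MMDA-style score (worst-case, i.e., minimal, pairwise divergence). The squared distance between projected class means stands in for the paper's divergence measure, and the function name, toy data, and projection are all hypothetical.

    # Illustrative sketch only: compares an average-divergence criterion (LDA-like)
    # with a worst-case-divergence criterion (MMDA-like) for a fixed projection W.
    # The pairwise "divergence" here is the squared distance between projected
    # class means, a simplification of whatever measure the paper actually uses.
    import numpy as np

    def pairwise_separations(X, y, W):
        """Squared distances between all pairs of class means in the subspace X @ W."""
        Z = X @ W
        classes = np.unique(y)
        means = np.array([Z[y == c].mean(axis=0) for c in classes])
        seps = []
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                seps.append(np.sum((means[i] - means[j]) ** 2))
        return np.array(seps)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))            # hypothetical 10-D data
    y = rng.integers(0, 5, size=300)          # 5 classes
    W = rng.normal(size=(10, 2))              # some candidate 2-D projection

    seps = pairwise_separations(X, y, W)
    print("LDA-style score (average divergence):", seps.mean())
    print("MMDA-style score (worst-case divergence):", seps.min())

The point of the max-min design is visible in these two scores: an average can remain high even when one pair of similar classes collapses together, whereas the minimum is driven entirely by the most confusable pair, so maximizing it keeps every pair of classes separated.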