Dhir Chandra Shekhar, Lee Soo-Young
Department of Bio and Brain Engineering, Brain Science Research Center, Korea Advanced Institute of Science and Technology, Daejeon, Korea.
IEEE Trans Neural Netw. 2011 Jun;22(6):845-57. doi: 10.1109/TNN.2011.2122266. Epub 2011 Apr 25.
A conventional linear model based on negentropy maximization extracts statistically independent latent variables, which may not be optimal for building a discriminant model with good classification performance. In this paper, a single-stage linear semisupervised extraction of discriminative independent features is proposed. Discriminant independent component analysis (dICA) provides a framework for linearly projecting multivariate data to a lower dimension where the features are maximally discriminant with minimal redundancy. The optimization problem is formulated as the maximization of the sum of negentropy and a weighted functional measure of classification. Motivated by the independence among the extracted features, the Fisher linear discriminant is used as the functional measure of classification. Experimental results show improved classification performance when dICA features are used for recognition tasks, in comparison to unsupervised techniques (principal component analysis and ICA) and supervised feature extraction techniques such as linear discriminant analysis (LDA), conditional ICA, and methods based on information-theoretic learning. dICA features also yield lower data reconstruction error than LDA and the negentropy-maximization-based ICA method.
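The abstract describes an objective that combines negentropy with a weighted Fisher discriminant measure of the projected features. The sketch below is an illustrative reading of that idea, not the paper's actual algorithm: it evaluates, for a single candidate projection direction, Hyvärinen's log-cosh negentropy approximation plus a weighted Fisher ratio. The function names, the weight `lam`, and the use of a single direction are all assumptions for illustration.

```python
import numpy as np

def negentropy_approx(y):
    # Hyvarinen's one-unit approximation: J(y) ~ (E[G(y)] - E[G(v)])^2,
    # with G(u) = log cosh(u) and v a standard Gaussian
    # (E[G(v)] ~ 0.3746 for this choice of G).
    return (np.mean(np.log(np.cosh(y))) - 0.3746) ** 2

def fisher_ratio(y, labels):
    # Between-class over within-class scatter of the 1-D projected feature y.
    classes = np.unique(labels)
    mu = y.mean()
    sb = sum(np.sum(labels == c) * (y[labels == c].mean() - mu) ** 2
             for c in classes)
    sw = sum(np.sum((y[labels == c] - y[labels == c].mean()) ** 2)
             for c in classes)
    return sb / sw

def dica_objective(w, X, labels, lam=1.0):
    # Illustrative combined objective for one unit-norm direction w:
    # negentropy of the projection plus a weighted Fisher discriminant term.
    # X is assumed to be centered (and ideally whitened, as is usual for ICA).
    y = X @ (w / np.linalg.norm(w))
    return negentropy_approx(y) + lam * fisher_ratio(y, labels)
```

A direction that separates the classes should score higher on this combined criterion than one that does not; in practice the paper optimizes over a full projection matrix rather than one direction at a time.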