IEEE Trans Neural Netw Learn Syst. 2016 Apr;27(4):723-35. doi: 10.1109/TNNLS.2015.2422994. Epub 2015 May 6.
Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity (L1-norm) learning, which further extends LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating between a modified elastic net and a singular value decomposition (SVD). We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in the case of small sample sizes.
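The elastic-net/SVD alternation mentioned in the abstract can be illustrated with a minimal sketch in the style of sparse PCA (Zou et al.): fix an orthonormal loading matrix A and fit each sparse direction with an elastic net, then fix the sparse matrix B and update A via the orthogonal Procrustes (SVD) solution. This is an assumed simplification, not the paper's exact algorithm — in particular it omits the LLE reconstruction-weight matrix that the actual framework builds in, and all parameter names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sparse_embedding(X, k, n_iter=20, enet_alpha=0.05, l1_ratio=0.5):
    """Hypothetical sketch of the elastic-net / SVD alternation.

    X : (n, d) centered data matrix; k : target dimension.
    Returns (A, B): A has orthonormal columns, B holds the sparse
    projection directions. The paper's modified elastic net would
    additionally encode the LLE neighborhood structure (omitted here).
    """
    n, d = X.shape
    # Initialise A with the leading right singular vectors of X.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:k].T                      # (d, k), orthonormal columns
    B = np.zeros((d, k))
    for _ in range(n_iter):
        # Elastic-net step: for fixed A, fit each sparse direction b_j
        # to reproduce the projection X @ a_j.
        for j in range(k):
            enet = ElasticNet(alpha=enet_alpha, l1_ratio=l1_ratio,
                              fit_intercept=False, max_iter=5000)
            enet.fit(X, X @ A[:, j])
            B[:, j] = enet.coef_
        # SVD step: for fixed B, the optimal orthonormal A is the
        # Procrustes solution U @ Vt of (X^T X) B.
        U, _, Vt2 = np.linalg.svd((X.T @ X) @ B, full_matrices=False)
        A = U @ Vt2
    return A, B
```

A low-dimensional embedding of the data would then be obtained as `Y = X @ B`, with the L1 penalty driving entries of `B` to exactly zero for interpretable, sparse projections.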