

Robust dimensionality reduction via feature space to feature space distance metric learning.

Affiliations

School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China; Institute of Big Data Science and Engineering, Wuhan University of Science and Technology, Wuhan, China.

School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China.

Publication Information

Neural Netw. 2019 Apr;112:1-14. doi: 10.1016/j.neunet.2019.01.001. Epub 2019 Jan 21.

Abstract

Images are often represented as high-dimensional vectors when used in classification, so dimensionality reduction methods must be developed to avoid the curse of dimensionality. Among them, Laplacian eigenmaps (LE) have attracted widespread attention. The original LE typically adopts a point-to-point (P2P) distance metric for manifold learning, which unfortunately offers little robustness to noise. In this paper, a novel supervised dimensionality reduction method, named feature space to feature space distance metric learning (FSDML), is presented. For any point, a feature space spanned by its k intra-class nearest neighbors can be constructed, yielding a local projection of the point onto its nearest feature space. The feature space to feature space (S2S) distance metric is then defined as the Euclidean distance between two such projections. On one hand, the proposed S2S distance metric gains robustness from the local projection. On the other hand, the projection onto the nearest feature space fully exploits the local geometric information hidden in the original data. Moreover, both class-label similarity and dissimilarity are measured, from which an intra-class graph and an inter-class graph are separately constructed. Finally, a subspace for classification is found by simultaneously maximizing the S2S-based manifold-to-manifold distance and preserving the S2S-based locality of the manifolds. Experiments on both synthetic and benchmark data sets validate the proposed method's performance against several state-of-the-art dimensionality reduction methods.
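The S2S metric described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the orthogonal (least-squares) projection onto the span of the k neighbors and the brute-force intra-class nearest-neighbor search are assumptions made for clarity.

```python
import numpy as np

def project_onto_span(x, neighbors):
    """Project x onto the linear span of the neighbor vectors.

    neighbors: (k, d) array whose rows span the local feature space.
    A least-squares fit gives the orthogonal projection onto that span.
    """
    B = neighbors.T                              # (d, k) basis matrix
    coef, *_ = np.linalg.lstsq(B, x, rcond=None)
    return B @ coef                              # projection, a vector in R^d

def s2s_distance(x, y, X_class_x, X_class_y, k=3):
    """Feature space to feature space (S2S) distance between x and y.

    Each point is projected onto the span of its k nearest intra-class
    neighbors; the S2S distance is the Euclidean distance between the
    two projections (hypothetical sketch of the abstract's definition).
    """
    def knn(p, X):
        d = np.linalg.norm(X - p, axis=1)        # brute-force search
        return X[np.argsort(d)[:k]]
    px = project_onto_span(x, knn(x, X_class_x))
    py = project_onto_span(y, knn(y, X_class_y))
    return np.linalg.norm(px - py)
```

Because each point is replaced by its projection onto a locally fitted subspace, small noise components orthogonal to that subspace are discarded before the distance is measured, which is the intuition behind the claimed robustness.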

