A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

Affiliation

Department of Computing, Hong Kong Polytechnic University, Hunghom, Kowloon, Hong Kong.

Publication information

Neural Netw. 2014 Jan;49:96-106. doi: 10.1016/j.neunet.2013.09.004. Epub 2013 Oct 9.

Abstract

Most dimensionality reduction techniques are based on a single metric or kernel, so kernel-based dimensionality reduction requires selecting an appropriate kernel. Multiple kernel learning for dimensionality reduction (MKL-DR) was recently proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it can be ill-posed under some conditions, which hinders its applications. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method learns both a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, some of which may not be suitable for the given data. Solutions to the proposed framework can be found via trace ratio maximization. Experimental results on benchmark text, image, and sound datasets demonstrate its effectiveness in supervised, unsupervised, and semi-supervised settings.
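The core computation the abstract describes, maximizing a trace ratio over a projection learned from a weighted combination of base kernels, can be sketched as follows. This is a minimal illustration of the generic iterative trace-ratio algorithm and the standard weighted-sum kernel combination, not the paper's full MKL-TR procedure; the function names and the scatter matrices `Sb`/`Sw` are illustrative placeholders.

```python
import numpy as np

def combine_kernels(base_kernels, weights):
    # Standard MKL combination: a weighted sum of base kernel matrices.
    return sum(w * K for w, K in zip(weights, base_kernels))

def trace_ratio(Sb, Sw, d, n_iter=50, tol=1e-8):
    """Maximize tr(W^T Sb W) / tr(W^T Sw W) over n x d matrices W with
    orthonormal columns, using the classic trace-ratio iteration:
    repeatedly take the top-d eigenvectors of (Sb - lam * Sw), then
    update lam to the ratio achieved by the current W."""
    lam = 0.0
    W = None
    for _ in range(n_iter):
        # eigh returns eigenvalues in ascending order; take the top d.
        _, eigvecs = np.linalg.eigh(Sb - lam * Sw)
        W = eigvecs[:, -d:]
        new_lam = np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam
```

In a multiple-kernel setting, `Sb` and `Sw` would be built from the combined kernel returned by `combine_kernels`, and the kernel weights would in turn be updated with the projection fixed, alternating until convergence.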


Similar articles

Multiple kernel learning for dimensionality reduction.
IEEE Trans Pattern Anal Mach Intell. 2011 Jun;33(6):1147-60. doi: 10.1109/TPAMI.2010.183.

Graph embedding and extensions: a general framework for dimensionality reduction.
IEEE Trans Pattern Anal Mach Intell. 2007 Jan;29(1):40-51. doi: 10.1109/TPAMI.2007.12.

Ideal regularization for learning kernels from labels.
Neural Netw. 2014 Aug;56:22-34. doi: 10.1016/j.neunet.2014.04.003. Epub 2014 May 2.

Soft margin multiple kernel learning.
IEEE Trans Neural Netw Learn Syst. 2013 May;24(5):749-61. doi: 10.1109/TNNLS.2012.2237183.

Multiple kernel sparse representations for supervised and unsupervised learning.
IEEE Trans Image Process. 2014 Jul;23(7):2905-15. doi: 10.1109/TIP.2014.2322938. Epub 2014 May 9.

Training Lp norm multiple kernel learning in the primal.
Neural Netw. 2013 Oct;46:172-82. doi: 10.1016/j.neunet.2013.05.003. Epub 2013 May 24.

Partially supervised speaker clustering.
IEEE Trans Pattern Anal Mach Intell. 2012 May;34(5):959-71. doi: 10.1109/TPAMI.2011.174.
