Semisupervised kernel matrix learning by kernel propagation.

Author information

Hu Enliang, Chen Songcan, Zhang Daoqiang, Yin Xuesong

Affiliations

Department of Mathematics, Yunnan Normal University, Kunming, China.

Publication information

IEEE Trans Neural Netw. 2010 Nov;21(11):1831-41. doi: 10.1109/TNN.2010.2076301. Epub 2010 Oct 4.

DOI: 10.1109/TNN.2010.2076301
PMID: 20923733
Abstract

The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples, on which only a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves room for improvement in both effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm formulated SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the overall performance of SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.
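The three-stage recipe in the abstract can be sketched in code. This is a minimal illustrative sketch, not the authors' formulation: the small-scale SDP of stage 2 is replaced here by the "ideal" label kernel on the supervised subset, and the propagation of stage 3 uses a row-normalized RBF similarity matrix `A` from all samples to the labeled ones (both stand-ins are assumptions; the paper learns these quantities by optimization).

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Pairwise RBF similarities between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ideal_seed_kernel(y_l):
    """Stand-in for the paper's small-scale SDP (stage 2): the 'ideal'
    kernel K_ij = 1 if samples share a class label, else 0 (PSD)."""
    return (y_l[:, None] == y_l[None, :]).astype(float)

def propagate(X, labeled_idx, y_l, gamma=1.0):
    """Stages 1-3: split off X_l, learn a seed kernel on it, and
    propagate it to a full kernel on all of X."""
    X_l = X[labeled_idx]                       # stage 1: supervised subset
    K_seed = ideal_seed_kernel(y_l)            # stage 2: seed-kernel matrix
    A = rbf(X, X_l, gamma)                     # similarities to labeled set
    A /= A.sum(axis=1, keepdims=True)          # row-normalize
    return A @ K_seed @ A.T                    # stage 3: full-kernel matrix
```

Note the design point this preserves: because the full kernel has the form A K_seed Aᵀ, it stays symmetric positive semidefinite whenever the seed kernel is, so the propagated matrix remains a valid kernel.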


Similar articles

1. Semisupervised kernel matrix learning by kernel propagation.
IEEE Trans Neural Netw. 2010 Nov;21(11):1831-41. doi: 10.1109/TNN.2010.2076301. Epub 2010 Oct 4.
2. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Neural Netw. 2011 Jun;24(5):476-83. doi: 10.1016/j.neunet.2011.03.009. Epub 2011 Mar 12.
3. A scalable kernel-based semisupervised metric learning algorithm with out-of-sample generalization ability.
Neural Comput. 2008 Nov;20(11):2839-61. doi: 10.1162/neco.2008.05-07-528.
4. Semisupervised learning using negative labels.
IEEE Trans Neural Netw. 2011 Mar;22(3):420-32. doi: 10.1109/TNN.2010.2099237. Epub 2011 Jan 13.
5. Efficient hyperkernel learning using second-order cone programming.
IEEE Trans Neural Netw. 2006 Jan;17(1):48-58. doi: 10.1109/TNN.2005.860848.
6. A kernel approach for semisupervised metric learning.
IEEE Trans Neural Netw. 2007 Jan;18(1):141-9. doi: 10.1109/TNN.2006.883723.
7. Ideal regularization for learning kernels from labels.
Neural Netw. 2014 Aug;56:22-34. doi: 10.1016/j.neunet.2014.04.003. Epub 2014 May 2.
8. Efficient sparse generalized multiple kernel learning.
IEEE Trans Neural Netw. 2011 Mar;22(3):433-46. doi: 10.1109/TNN.2010.2103571. Epub 2011 Jan 20.
9. Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.
IEEE Trans Neural Netw Learn Syst. 2015 Sep;26(9):1927-38. doi: 10.1109/TNNLS.2014.2361159. Epub 2014 Oct 17.
10. Approximate kernel competitive learning.
Neural Netw. 2015 Mar;63:117-32. doi: 10.1016/j.neunet.2014.11.003. Epub 2014 Nov 27.