
Sparse approximation through boosting for learning large scale kernel machines.

Authors

Sun Ping, Yao Xin

Affiliations

Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), University of Birmingham, Edgbaston, Birmingham B15 2TT, UK.

Publication

IEEE Trans Neural Netw. 2010 Jun;21(6):883-94. doi: 10.1109/TNN.2010.2044244. Epub 2010 Apr 19.

DOI: 10.1109/TNN.2010.2044244
PMID: 20409992
Abstract

Recently, sparse approximation has become a preferred method for learning large scale kernel machines. This technique attempts to represent the solution with only a subset of the original data points, also known as basis vectors, which are usually chosen one by one with a forward selection procedure based on some selection criteria. The computational complexity of several resultant algorithms scales as O(NM²) in time and O(NM) in memory, where N is the number of training points and M is the number of basis vectors, which also equals the number of forward-selection steps. For some large scale data sets, to obtain a better solution, we are sometimes required to include more basis vectors, which means that M is not trivial in this situation. However, limited computational resources (e.g., memory) prevent us from including too many vectors. To handle this dilemma, we propose to add an ensemble of basis vectors instead of only one at each forward step. The proposed method, closely related to gradient boosting, can decrease the required number M of forward steps significantly, and thus a large fraction of the computational cost is saved. Numerical experiments on three large scale regression tasks and a classification problem demonstrate the effectiveness of the proposed approach.
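The select-then-refit loop the abstract describes can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's algorithm: the residual-magnitude selection criterion, the names `rbf` and `forward_select`, and the `batch` parameter are all hypothetical. The per-step regularized least squares on the N×M kernel columns is where the O(NM²) time cost comes from, and `batch > 1` mimics adding an ensemble of basis vectors per forward step, cutting the number of steps.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def forward_select(X, y, M, batch=1, gamma=1.0, lam=1e-6):
    """Greedy forward selection of basis vectors for kernel regression.

    batch=1 is classic one-at-a-time selection; batch>1 adds several
    basis vectors per forward step, in the spirit of the ensemble idea
    described in the abstract (the criterion here is a simple heuristic).
    """
    basis, resid, steps = [], y.copy(), 0
    while len(basis) < M:
        # Selection criterion (an assumption for this sketch): take the
        # unselected points where the current residual is largest.
        cand = [i for i in np.argsort(-np.abs(resid)) if i not in basis]
        basis.extend(cand[:batch])
        K = rbf(X, X[np.array(basis)], gamma)   # N x m kernel columns
        # Regularized least squares on the selected columns: O(N m^2).
        alpha = np.linalg.solve(K.T @ K + lam * np.eye(len(basis)), K.T @ y)
        resid = y - K @ alpha
        steps += 1
    return basis, alpha, steps

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
b1, a1, s1 = forward_select(X, y, M=20, batch=1)   # 20 forward steps
b5, a5, s5 = forward_select(X, y, M=20, batch=5)   # only 4 forward steps
```

Both runs end with the same number of basis vectors, but the batched variant needs a fifth of the forward steps, which is the saving the abstract claims for the boosting-style update.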


Similar Articles

1. Sparse approximation through boosting for learning large scale kernel machines.
   IEEE Trans Neural Netw. 2010 Jun;21(6):883-94. doi: 10.1109/TNN.2010.2044244. Epub 2010 Apr 19.
2. Stochastic subset selection for learning with kernel machines.
   IEEE Trans Syst Man Cybern B Cybern. 2012 Jun;42(3):616-26. doi: 10.1109/TSMCB.2011.2171680. Epub 2011 Oct 27.
3. Large-scale maximum margin discriminant analysis using core vector machines.
   IEEE Trans Neural Netw. 2008 Apr;19(4):610-24. doi: 10.1109/TNN.2007.911746.
4. Boosting method for local learning in statistical pattern recognition.
   Neural Comput. 2008 Nov;20(11):2792-838. doi: 10.1162/neco.2008.06-07-549.
5. An efficient data preprocessing approach for large scale medical data mining.
   Technol Health Care. 2015;23(2):153-60. doi: 10.3233/THC-140887.
6. Probabilistic classification vector machines.
   IEEE Trans Neural Netw. 2009 Jun;20(6):901-14. doi: 10.1109/TNN.2009.2014161. Epub 2009 Apr 24.
7. Online learning control using adaptive critic designs with sparse kernel machines.
   IEEE Trans Neural Netw Learn Syst. 2013 May;24(5):762-75. doi: 10.1109/TNNLS.2012.2236354.
8. Kernel map compression for speeding the execution of kernel-based methods.
   IEEE Trans Neural Netw. 2011 Jun;22(6):870-9. doi: 10.1109/TNN.2011.2127485. Epub 2011 May 5.
9. Sparse multiple kernel learning for signal processing applications.
   IEEE Trans Pattern Anal Mach Intell. 2010 May;32(5):788-98. doi: 10.1109/TPAMI.2009.98.
10. A fast algorithm for learning a ranking function from large-scale data sets.
   IEEE Trans Pattern Anal Mach Intell. 2008 Jul;30(7):1158-70. doi: 10.1109/TPAMI.2007.70776.