
New Scalable and Efficient Online Pairwise Learning Algorithm.

Author Information

Gu Bin, Bao Runxue, Zhang Chenkang, Huang Heng

Publication Information

IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17099-17110. doi: 10.1109/TNNLS.2023.3299756. Epub 2024 Dec 2.

DOI: 10.1109/TNNLS.2023.3299756
PMID: 37656641
Abstract

Pairwise learning is an important machine-learning topic with many practical applications. An online algorithm is the first choice for processing streaming data and is preferred for handling large-scale pairwise learning problems. However, existing online pairwise learning algorithms are not scalable and efficient enough for large-scale, high-dimensional data because they were designed based on singly stochastic gradients. To address this challenging problem, in this article we propose a dynamic doubly stochastic gradient algorithm (D2SG) for online pairwise learning. In particular, only O(d) time and space complexity is needed to incorporate a new sample, where d is the dimensionality of the data. This means that our D2SG is much faster and more scalable than existing online pairwise learning algorithms, while statistical accuracy is guaranteed through our rigorous theoretical analysis under standard assumptions. Experimental results on a variety of real-world datasets not only confirm the theoretical results for our new D2SG algorithm, but also show that D2SG has better efficiency and scalability than existing online pairwise learning algorithms.
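
The abstract's key claim is that each new streaming sample can be incorporated in O(d) time and space. As a rough illustration of how an online pairwise learner can meet that budget, the following minimal Python sketch pairs each incoming sample with a single reservoir-sampled past sample and takes one stochastic gradient step on a pairwise hinge loss. This shows only the general pattern, not the authors' D2SG algorithm (whose dynamic doubly stochastic gradient construction is specified in the paper itself); the class name, loss, and hyperparameters here are illustrative assumptions.

import numpy as np

# Hedged sketch, not the paper's D2SG: a minimal online pairwise learner
# showing how per-sample updates can stay O(d) in time and space. Each
# incoming sample is paired with one reservoir-sampled past sample, and a
# single stochastic gradient step is taken on a pairwise hinge loss.
# The class name, loss, and hyperparameters are illustrative assumptions.

class OnlinePairwiseLearner:
    def __init__(self, dim, lr=0.01, reg=1e-4):
        self.w = np.zeros(dim)   # linear scoring model, O(d) space
        self.buffer = None       # one retained (x, y) sample, O(d) space
        self.t = 0               # number of samples seen so far
        self.lr = lr             # step size
        self.reg = reg           # L2 regularization strength

    def partial_fit(self, x, y):
        """Incorporate one streaming sample (binary label) in O(d) time."""
        self.t += 1
        if self.buffer is not None:
            xb, yb = self.buffer
            if y != yb:  # pairwise losses are defined on +/- pairs
                # orient the pair so x_pos should score above x_neg
                x_pos, x_neg = (x, xb) if y > yb else (xb, x)
                diff = x_pos - x_neg
                grad = self.reg * self.w
                if self.w @ diff < 1.0:  # hinge max(0, 1 - w.diff) is active
                    grad -= diff
                self.w -= self.lr * grad
        # reservoir sampling with a buffer of size 1: replacing with
        # probability 1/t keeps the buffered sample uniformly random
        if self.buffer is None or np.random.rand() < 1.0 / self.t:
            self.buffer = (np.asarray(x, dtype=float).copy(), y)

    def score(self, x):
        return self.w @ x  # larger score = predicted positive class

For a stream of feature vectors and binary labels, calling learner.partial_fit(x_t, y_t) once per arriving sample keeps total memory at O(d) regardless of stream length, since only the weight vector and one buffered sample are stored; the paper's D2SG additionally provides the doubly stochastic gradient machinery behind its statistical guarantees.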


Similar Articles

1. New Scalable and Efficient Online Pairwise Learning Algorithm.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17099-17110. doi: 10.1109/TNNLS.2023.3299756. Epub 2024 Dec 2.
2. Large-Scale Nonlinear AUC Maximization via Triply Stochastic Gradients.
   IEEE Trans Pattern Anal Mach Intell. 2022 Mar;44(3):1385-1398. doi: 10.1109/TPAMI.2020.3024987. Epub 2022 Feb 3.
3. Learning Rates for Nonconvex Pairwise Learning.
   IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9996-10011. doi: 10.1109/TPAMI.2023.3259324. Epub 2023 Jun 30.
4. Online nonnegative matrix factorization with robust stochastic approximation.
   IEEE Trans Neural Netw Learn Syst. 2012 Jul;23(7):1087-99. doi: 10.1109/TNNLS.2012.2197827.
5. Scalable Kernel Ordinal Regression via Doubly Stochastic Gradients.
   IEEE Trans Neural Netw Learn Syst. 2021 Aug;32(8):3677-3689. doi: 10.1109/TNNLS.2020.3015937. Epub 2021 Aug 3.
6. Low-rank robust online distance/similarity learning based on the rescaled hinge loss.
   Appl Intell (Dordr). 2023;53(1):634-657. doi: 10.1007/s10489-022-03419-1. Epub 2022 Apr 20.
7. Online cross-validation-based ensemble learning.
   Stat Med. 2018 Jan 30;37(2):249-260. doi: 10.1002/sim.7320. Epub 2017 May 4.
8. Online feature selection with streaming features.
   IEEE Trans Pattern Anal Mach Intell. 2013 May;35(5):1178-92. doi: 10.1109/TPAMI.2012.197.
9. Asynchronous Parallel Large-Scale Gaussian Process Regression.
   IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):8683-8694. doi: 10.1109/TNNLS.2022.3200602. Epub 2024 Jun 3.
10. Online Passive-Aggressive Active Learning for Trapezoidal Data Streams.
    IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):6725-6739. doi: 10.1109/TNNLS.2022.3178880. Epub 2023 Oct 5.