Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments.

Authors

Hishinuma Kazuhiro, Iiduka Hideaki

Affiliations

Computer Science Program, Graduate School of Science and Technology, Meiji University, Kawasaki, Japan.

Department of Computer Science, Meiji University, Kawasaki, Japan.

Publication information

Front Robot AI. 2019 Aug 27;6:77. doi: 10.3389/frobt.2019.00077. eCollection 2019.

DOI: 10.3389/frobt.2019.00077
PMID: 33501092
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805887/
Abstract

The existing machine learning algorithms for minimizing the convex function over a closed convex set suffer from slow convergence because their learning rates must be determined before running them. This paper proposes two machine learning algorithms incorporating the line search method, which automatically and algorithmically finds appropriate learning rates at run-time. One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each of the parts of the objective function; the other is based on the parallel subgradient algorithm, which uses parts independently in parallel. These algorithms can be applied to constrained nonsmooth convex optimization problems appearing in tasks of learning support vector machines without adjusting the learning rates precisely. The proposed line search method can determine learning rates to satisfy weaker conditions than the ones used in the existing machine learning algorithms. This implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems. We show that they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem under certain conditions. The main contribution of this paper is the provision of three kinds of experiment showing that the two algorithms can solve concrete experimental problems faster than the existing algorithms. First, we show that the proposed algorithms have performance advantages over the existing ones in solving a test problem. Second, we compare the proposed algorithms with a different algorithm Pegasos, which is designed to learn with a support vector machine efficiently, in terms of prediction accuracy, value of the objective function, and computational time. Finally, we use one of our algorithms to train a multilayer neural network and discuss its applicability to deep learning.
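To make the run-time step-size idea concrete, here is a minimal Python sketch of an incremental projected subgradient method that chooses its learning rate with a naive backtracking line search. This illustrates only the general approach described in the abstract; it is not the paper's algorithm, and the paper's line search satisfies weaker conditions than the plain descent test used here. The toy hinge-loss problem, the norm-ball constraint, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SVM-like problem: minimize f(w) = sum_i max(0, 1 - y_i <x_i, w>)
# over the closed convex set {w : ||w|| <= R}. Data are synthetic.
n, d, R = 50, 5, 10.0
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

def piece(w, i):
    # i-th part of the objective (hinge loss on example i)
    return max(0.0, 1.0 - y[i] * (X[i] @ w))

def objective(w):
    return sum(piece(w, i) for i in range(n))

def subgrad(w, i):
    # a subgradient of the i-th piece
    if 1.0 - y[i] * (X[i] @ w) > 0.0:
        return -y[i] * X[i]
    return np.zeros(d)

def project(w):
    # Euclidean projection onto the ball ||w|| <= R
    nrm = np.linalg.norm(w)
    return w if nrm <= R else (R / nrm) * w

def line_search(w, g, lr0=1.0, beta=0.5, max_halvings=20):
    # Naive backtracking: halve the learning rate until the full
    # objective does not increase. (The paper's line search uses
    # weaker conditions; evaluating the full objective here is only
    # to keep the sketch simple.)
    lr, fw = lr0, objective(w)
    for _ in range(max_halvings):
        if objective(project(w - lr * g)) <= fw:
            break
        lr *= beta
    return lr

w = np.zeros(d)
for epoch in range(30):
    for i in range(n):  # sequential, cyclic sweep over the pieces
        g = subgrad(w, i)
        if np.any(g):
            w = project(w - line_search(w, g) * g)
    print(epoch, objective(w))
```

The parallel variant described in the abstract would instead evaluate the pieces' subgradients independently (for example, one per worker) and combine the resulting updates, rather than sweeping through the pieces cyclically.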

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ee1/7805887/33a46fd0042a/frobt-06-00077-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ee1/7805887/c743e65de71d/frobt-06-00077-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ee1/7805887/6f296f96e62b/frobt-06-00077-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ee1/7805887/0f27bbb86e9b/frobt-06-00077-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ee1/7805887/67c605c9b5b7/frobt-06-00077-g0005.jpg

Similar articles

1. Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments. Front Robot AI. 2019 Aug 27;6:77. doi: 10.3389/frobt.2019.00077. eCollection 2019.
2. A subgradient-based neurodynamic algorithm to constrained nonsmooth nonconvex interval-valued optimization. Neural Netw. 2023 Mar;160:259-273. doi: 10.1016/j.neunet.2023.01.012. Epub 2023 Jan 20.
3. Subgradient-based neural networks for nonsmooth nonconvex optimization problems. IEEE Trans Neural Netw. 2009 Jun;20(6):1024-38. doi: 10.1109/TNN.2009.2016340. Epub 2009 May 19.
4. Subgradient ellipsoid method for nonsmooth convex problems. Math Program. 2023;199(1-2):305-341. doi: 10.1007/s10107-022-01833-4. Epub 2022 Jun 14.
5. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization. IEEE Trans Neural Netw Learn Syst. 2020 Jul;31(7):2557-2568. doi: 10.1109/TNNLS.2019.2933452. Epub 2019 Sep 2.
6. An incremental mirror descent subgradient algorithm with random sweeping and proximal step. Optimization. 2018 Jun 14;68(1):33-50. doi: 10.1080/02331934.2018.1482491. eCollection 2019.
7. Neural network for constrained nonsmooth optimization using Tikhonov regularization. Neural Netw. 2015 Mar;63:272-81. doi: 10.1016/j.neunet.2014.12.007. Epub 2014 Dec 31.
8. Nonsmooth Optimization-Based Model and Algorithm for Semisupervised Clustering. IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5517-5530. doi: 10.1109/TNNLS.2021.3129370. Epub 2023 Sep 1.
9. A neurodynamic approach to convex optimization problems with general constraint. Neural Netw. 2016 Dec;84:113-124. doi: 10.1016/j.neunet.2016.08.014. Epub 2016 Sep 9.
10. An accelerated proximal gradient algorithm for singly linearly constrained quadratic programs with box constraints. ScientificWorldJournal. 2013 Oct 7;2013:246596. doi: 10.1155/2013/246596. eCollection 2013.