
Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments.

Author Information

Hishinuma Kazuhiro, Iiduka Hideaki

Affiliations

Computer Science Program, Graduate School of Science and Technology, Meiji University, Kawasaki, Japan.

Department of Computer Science, Meiji University, Kawasaki, Japan.

Publication Information

Front Robot AI. 2019 Aug 27;6:77. doi: 10.3389/frobt.2019.00077. eCollection 2019.

Abstract

The existing machine learning algorithms for minimizing a convex function over a closed convex set suffer from slow convergence because their learning rates must be determined before running them. This paper proposes two machine learning algorithms incorporating a line search method, which automatically and algorithmically finds appropriate learning rates at run-time. One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each part of the objective function; the other is based on the parallel subgradient algorithm, which uses the parts independently and in parallel. These algorithms can be applied to constrained nonsmooth convex optimization problems arising in tasks of learning support vector machines without precise tuning of the learning rates. The proposed line search method can determine learning rates satisfying weaker conditions than those used in the existing machine learning algorithms, which implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems. We show that, under certain conditions, they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem. The main contribution of this paper is three kinds of experiments showing that the two algorithms solve concrete experimental problems faster than the existing algorithms. First, we show that the proposed algorithms have performance advantages over the existing ones on a test problem. Second, we compare the proposed algorithms with a different algorithm, Pegasos, which is designed to learn efficiently with a support vector machine, in terms of prediction accuracy, value of the objective function, and computational time. Finally, we use one of our algorithms to train a multilayer neural network and discuss its applicability to deep learning.
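To make the incremental scheme concrete, below is a minimal Python sketch of a projected incremental subgradient sweep in which the learning rate is chosen at run-time by a simple backtracking rule. The helper names (`fs`, `subgrads`, `project`) and the halving acceptance test are illustrative assumptions for this sketch; the paper's actual line search admits weaker conditions than a plain decrease test.

```python
import numpy as np

def incremental_subgradient(fs, subgrads, project, x0, eta0=1.0,
                            beta=0.5, eta_min=1e-10, max_sweeps=100):
    """Incremental projected subgradient with a backtracking line search.

    fs       -- list of convex (possibly nonsmooth) component objectives f_i
    subgrads -- list of functions returning a subgradient of f_i at x
    project  -- metric projection onto the closed convex set C
    x0       -- initial point
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_sweeps):
        # One sweep: use each component f_i sequentially and cyclically.
        for f, g in zip(fs, subgrads):
            d = g(x)
            eta = eta0
            # Backtracking: shrink the learning rate at run-time instead of
            # fixing a step-size schedule before the algorithm starts.
            while True:
                x_trial = project(x - eta * d)
                if f(x_trial) <= f(x) or eta <= eta_min:
                    break  # accept on decrease, or stop at the floor
                eta *= beta
            # A subgradient direction need not be a descent direction, so
            # the last trial is accepted even without a decrease.
            x = x_trial
    return x
```

For a soft-margin support vector machine, for instance, each component could be a hinge-loss term `f_i(w) = max(0, 1 - y_i * (w @ x_i))` plus a regularizer, with `C` a norm ball and `project` the rescaling of `w` onto that ball.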

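The parallel variant replaces the sequential sweep with independent per-component steps taken from the common iterate, which are then merged. In the sketch below (same assumptions as above), the merge is a plain average of the parallel results; this is one simple choice, not necessarily the paper's exact merging rule.

```python
import numpy as np

def parallel_subgradient(fs, subgrads, project, x0, eta0=1.0,
                         beta=0.5, eta_min=1e-10, max_iters=100):
    """Parallel projected subgradient: components are used independently.

    At each iteration, every component i line-searches its own step from
    the current iterate x; the loop bodies share no state, so they can run
    on separate workers. The next iterate merges the parallel results.
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iters):
        trials = []
        for f, g in zip(fs, subgrads):  # independent: parallelizable
            d = g(x)
            eta = eta0
            while True:
                x_i = project(x - eta * d)
                if f(x_i) <= f(x) or eta <= eta_min:
                    break
                eta *= beta
            trials.append(x_i)
        # Merge: average the per-component steps (the average stays in C
        # because C is convex).
        x = np.mean(trials, axis=0)
    return x
```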

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ee1/7805887/33a46fd0042a/frobt-06-00077-g0001.jpg
