Tao Wei, Wu Gao-Wei, Tao Qing
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1107-1118. doi: 10.1109/TNNLS.2020.3040325. Epub 2022 Feb 28.
The momentum technique has recently emerged as an effective strategy for accelerating the convergence of gradient descent (GD) methods, and it exhibits improved performance in deep learning as well as regularized learning. Typical momentum examples include Nesterov's accelerated gradient (NAG) and heavy-ball (HB) methods. So far, however, almost all acceleration analyses are limited to NAG, and few investigations of the acceleration of HB have been reported. In this article, we address the convergence of the last iterate of HB in nonsmooth optimization with constraints, which we name individual convergence. This question is significant in machine learning, where constraints are required to be imposed on the learning structure and the individual output is needed to effectively guarantee this structure while keeping an optimal rate of convergence. Specifically, we prove that HB achieves an individual convergence rate of O(1/√t), where t is the number of iterations. This indicates that both momentum methods can accelerate the individual convergence of basic GD to optimality. Even for the convergence of averaged iterates, our result avoids the drawbacks of previous work, which restricted the optimization problem to be unconstrained and required the number of iterations to be predefined. The novel convergence analysis presented in this article provides a clear understanding of how HB momentum accelerates individual convergence and reveals further insights into the similarities and differences between deriving the averaged and individual convergence rates. The derived optimal individual convergence is extended to regularized and stochastic settings, in which an individual solution can be produced by a projection-based operation. In contrast to the averaged output, the sparsity can be reduced remarkably without sacrificing the theoretically optimal rates. Several experiments on real data demonstrate the performance of the HB momentum strategy.
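The abstract refers to a projection-based HB update whose last ("individual") iterate converges at O(1/√t) on constrained nonsmooth problems. The Python sketch below illustrates the general shape of such an update on a toy ℓ1-loss objective over an ℓ2 ball; the step-size and momentum schedules, the toy objective, and the helper names (project_l2_ball, projected_heavy_ball) are illustrative assumptions and do not reproduce the paper's exact algorithm or parameter choices.

```python
# Minimal sketch: projected heavy-ball (HB) subgradient iteration for a
# constrained nonsmooth problem, returning the last iterate rather than
# an average. Parameter schedules below are assumptions for illustration.
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def subgradient(x, A, b):
    """A subgradient of the nonsmooth objective f(x) = ||Ax - b||_1."""
    return A.T @ np.sign(A @ x - b)

def projected_heavy_ball(A, b, n_iters=2000, radius=1.0):
    """Heavy-ball momentum subgradient method with projection."""
    d = A.shape[1]
    x_prev = np.zeros(d)
    x = np.zeros(d)
    for t in range(1, n_iters + 1):
        alpha = 1.0 / np.sqrt(t)        # diminishing step size (assumed schedule)
        beta = t / (t + 3.0)            # momentum weight (assumed schedule)
        g = subgradient(x, A, b)
        y = x - alpha * g + beta * (x - x_prev)    # heavy-ball step
        x_prev, x = x, project_l2_ball(y, radius)  # projection keeps feasibility
    return x  # individual (last-iterate) solution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = (A @ rng.standard_normal(10)) * 0.1
    x_hat = projected_heavy_ball(A, b)
    print("objective at last iterate:", np.abs(A @ x_hat - b).sum())
```

Returning the last iterate, as done here, is what allows a projection or proximal step to preserve structure (e.g., sparsity under ℓ1 regularization), whereas averaging the iterates generally destroys it.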