Tao Qing, Sun Zhengya, Kong Kang
Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
IEEE Trans Syst Man Cybern B Cybern. 2012 Feb;42(1):140-9. doi: 10.1109/TSMCB.2011.2163506. Epub 2011 Aug 30.
Most existing numerical optimization methods are based on the discretization of ordinary differential equations. To solve convex and smooth optimization problems arising in machine learning, this paper develops efficient batch and online algorithms based on a new principle: the optimized discretization of continuous dynamical systems (ODCDSs). First, a batch-learning projected gradient dynamical system with Lyapunov stability and a monotonicity property is introduced; its dynamical behavior guarantees the accuracy of the discretization-based optimizer and the applicability of a line search strategy. Furthermore, under reasonable assumptions, a new online learning algorithm achieving O(√T) or O(log T) regret is obtained. By using the line search strategy, the proposed batch-learning ODCDS is insensitive to the step size and decreases the objective faster. With only a small number of line search steps, the proposed stochastic algorithm attains sufficient stability and approximate optimality. Experimental results demonstrate the correctness of our theoretical analysis and the efficiency of our algorithms.
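The abstract does not spell out the update rules, so the following is a minimal sketch under assumed forms: a projected gradient flow dx/dt = P_C(x - alpha*grad_f(x)) - x over a box constraint set, discretized by an explicit Euler step whose size is chosen by Armijo backtracking (standing in for the batch ODCDS), and online projected gradient steps with eta_t proportional to 1/sqrt(t) for the O(√T) regret regime. The names (project_box, batch_odcds, online_pgd), the specific flow, the constraint set, and the line search rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the (assumed) box constraint set C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def batch_odcds(f, grad_f, x0, alpha=1.0, max_iter=200, tol=1e-8, beta=0.5, c=1e-4):
    """Explicit Euler discretization of the projected gradient flow
        dx/dt = P_C(x - alpha * grad_f(x)) - x,
    with Armijo backtracking on the Euler step size s."""
    x = x0.copy()
    for _ in range(max_iter):
        d = project_box(x - alpha * grad_f(x)) - x    # flow direction at x
        if np.linalg.norm(d) < tol:                   # equilibrium of the flow
            break
        s, slope = 1.0, grad_f(x) @ d                 # slope < 0 away from equilibria
        while f(x + s * d) > f(x) + c * s * slope:    # sufficient-decrease test
            s *= beta
        x = x + s * d
    return x

def online_pgd(loss_grads, x0, eta0=1.0, lo=-1.0, hi=1.0):
    """Online projected gradient descent: commit to x_t, receive the gradient
    of the t-th loss, then step with eta_t = eta0 / sqrt(t). This schedule
    gives O(sqrt(T)) regret for convex losses; eta_t ~ 1/t gives O(log T)
    when the losses are strongly convex."""
    x = x0.copy()
    plays = []
    for t, grad_t in enumerate(loss_grads, start=1):
        plays.append(x.copy())
        g = grad_t(x)                                 # feedback after playing x_t
        x = project_box(x - (eta0 / np.sqrt(t)) * g, lo, hi)
    return plays

if __name__ == "__main__":
    # Toy check: f(x) = 0.5 * ||x - b||^2 with b outside the box [-1, 1]^2.
    b = np.array([2.0, -3.0])
    f = lambda x: 0.5 * np.sum((x - b) ** 2)
    grad_f = lambda x: x - b
    print(batch_odcds(f, grad_f, x0=np.zeros(2)))     # -> [1. -1.], i.e. P_C(b)
```

In this sketch the Euler step is searched rather than fixed, which is the mechanism consistent with the abstract's claim that the batch method is insensitive to the step size.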