IEEE Trans Neural Netw Learn Syst. 2018 Oct;29(10):5008-5019. doi: 10.1109/TNNLS.2017.2764960. Epub 2018 Jan 17.
Online learning has been successfully applied in various machine learning problems. Conventional analyses of online learning achieve a sharp generalization bound under a strong convexity assumption. In this paper, we study the generalization ability of the classic online gradient descent algorithm under the quadratic growth condition (QGC), a strictly weaker condition than strong convexity. Under some mild assumptions, we prove that the excess risk converges at a rate no worse than $O(\log T/T)$ when the data are independently and identically distributed (i.i.d.). When the data are generated from a $\phi$-mixing process, we obtain the excess risk bound $O(\log T/T+\phi(\tau))$, where $\phi(\tau)$ is the mixing coefficient capturing the non-i.i.d. attribute. Our key technique combines the QGC with martingale concentration inequalities. Our results indicate that strong convexity is not necessary to achieve the sharp $O(\log T/T)$ convergence rate in online learning. We verify our theory on both synthetic and real-world data.
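The following is a minimal sketch, not the paper's experimental setup, of the kind of scenario the abstract describes: projected online gradient descent with step sizes $\eta_t = 1/(\mu t)$ on a least-squares stream whose population risk satisfies a quadratic growth condition but is not strongly convex (the feature covariance is rank-deficient). The modulus `mu`, the projection radius `R`, and the synthetic data model are assumptions made for illustration only.

```python
# Sketch: projected online gradient descent under a QGC-type (not strongly
# convex) population risk.  All constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 5000          # dimension and number of online rounds
mu, R = 1.0, 2.0         # assumed QGC modulus and feasible-ball radius
sigma = 0.1              # label noise level
w_star = np.zeros(d); w_star[0] = 1.0   # ground-truth parameter

def sample():
    # Features live in a 2-D subspace of R^d, so the risk Hessian is singular
    # (no strong convexity), yet the risk grows quadratically away from the
    # solution set, i.e., a quadratic growth condition holds.
    x = np.zeros(d)
    x[:2] = rng.standard_normal(2)
    y = x @ w_star + sigma * rng.standard_normal()
    return x, y

w = np.zeros(d)
w_bar = np.zeros(d)                      # running average of the iterates
for t in range(1, T + 1):
    x, y = sample()
    grad = (w @ x - y) * x               # gradient of the squared loss at (x, y)
    w -= grad / (mu * t)                 # OGD step with eta_t = 1/(mu * t)
    norm = np.linalg.norm(w)
    if norm > R:                         # project back onto the feasible ball
        w *= R / norm
    w_bar += (w - w_bar) / t

# Monte-Carlo estimate of the excess risk of the averaged iterate:
# 0.5 * E[(x^T w - y)^2] minus the noise floor 0.5 * sigma^2.
excess = 0.5 * np.mean([(w_bar @ x - y) ** 2 - sigma ** 2
                        for x, y in (sample() for _ in range(20000))])
print(f"estimated excess risk after T={T} rounds: {excess:.2e}")
```

Rerunning with larger `T` should show the estimated excess risk shrinking roughly like $\log T/T$, consistent with the rate claimed in the abstract, even though the risk here has no strongly convex curvature in most directions.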