
Analysis of Online Composite Mirror Descent Algorithm.

Authors

Lei Yunwen, Zhou Ding-Xuan

Affiliation

Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China

Publication

Neural Comput. 2017 Mar;29(3):825-860. doi: 10.1162/NECO_a_00930. Epub 2017 Jan 17.

Abstract

We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
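The update studied in the abstract can be illustrated concretely. As a minimal sketch (not the paper's analysis), assume the mirror map is the squared Euclidean norm Psi(w) = ||w||^2/2, the loss is the squared loss, and the sparsity-inducing regularizer is lam*||w||_1; each composite step then reduces to a gradient step followed by soft-thresholding, with polynomially decaying step sizes eta_t = eta1 * t^(-theta), and the algorithm returns the last iterate rather than an average:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (closed form under the Euclidean mirror map)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_composite_mirror_descent(stream, dim, lam=0.1, eta1=0.5, theta=0.5):
    """One pass of online composite mirror descent on a data stream.

    Each update solves
        w_{t+1} = argmin_w  <grad_t, w> + lam*||w||_1 + D_Psi(w, w_t) / eta_t,
    where D_Psi is the Bregman distance of the mirror map.  With
    Psi(w) = ||w||^2/2 this is a gradient step followed by soft-thresholding.
    """
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        eta = eta1 * t ** (-theta)       # polynomially decaying step size
        grad = (w @ x - y) * x           # gradient of the loss (1/2)(w.x - y)^2
        w = soft_threshold(w - eta * grad, eta * lam)
    return w                             # last iterate, no averaging
```

A usage example on synthetic sparse regression: draw samples `(x, y)` with `y = w_star @ x + noise` and run one pass; the `lam`, `eta1`, and `theta` values here are illustrative, not the paper's choices.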

