Minimizing Uniformly Convex Functions by Cubic Regularization of Newton Method.

Authors

Doikov Nikita, Nesterov Yurii

Affiliations

ICTEAM (Catholic University of Louvain), Louvain-la-Neuve, Belgium.

CORE (Catholic University of Louvain), Louvain-la-Neuve, Belgium.

Publication

J Optim Theory Appl. 2021;189(1):317-339. doi: 10.1007/s10957-021-01838-7. Epub 2021 Mar 10.

Abstract

In this paper, we study the iteration complexity of cubic regularization of Newton method for solving composite minimization problems with uniformly convex objective. We introduce the notion of second-order condition number of a certain degree and justify the linear rate of convergence in a nondegenerate case for the method with an adaptive estimate of the regularization parameter. The algorithm automatically achieves the best possible global complexity bound among different problem classes of uniformly convex objective functions with Hölder continuous Hessian of the smooth part of the objective. As a byproduct of our developments, we justify an intuitively plausible result that the global iteration complexity of the Newton method is always better than that of the gradient method on the class of strongly convex functions with uniformly bounded second derivative.
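At each iteration, the method described in the abstract minimizes a cubic model of the objective, f(x) + <g, h> + (1/2)<H h, h> + (M/6)||h||^3 with g and H the gradient and Hessian at x, while adapting the regularization parameter M on the fly. The Python sketch below illustrates this scheme under simplifying assumptions: the Hessian is taken to be positive definite, the cubic subproblem is solved by bisection on the step norm, and the doubling/halving update of M is a generic adaptive heuristic, not the authors' exact rule. Names such as cubic_newton_step, adaptive_cubic_newton, and M0 are illustrative, not from the paper.

```python
import numpy as np

def cubic_newton_step(g, H, M, tol=1e-10):
    """Minimize the cubic model  g@h + 0.5*h@H@h + (M/6)*||h||^3  over h.
    Its minimizer solves (H + (M*r/2) I) h = -g with r = ||h||, so we
    bisect on r (assumes H is positive definite, so the linear system
    is solvable for every r >= 0 and ||h(r)|| is decreasing in r)."""
    I = np.eye(g.shape[0])
    step = lambda r: np.linalg.solve(H + 0.5 * M * r * I, -g)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(step(hi)) > hi:   # bracket the root of ||h(r)|| = r
        hi *= 2.0
    while hi - lo > tol:                   # bisection on the step norm r
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return step(hi)

def adaptive_cubic_newton(f, grad, hess, x0, M0=1.0, tol=1e-8, max_iter=200):
    """Cubically regularized Newton with a doubling/halving estimate of
    the regularization parameter M (a generic adaptive rule, offered only
    as a stand-in for the scheme analyzed in the paper). A step is
    accepted when the true objective lies below the model's prediction,
    i.e. the cubic model acted as an upper bound."""
    x, M = np.asarray(x0, float).copy(), M0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        while True:
            h = cubic_newton_step(g, H, M)
            model = f(x) + g @ h + 0.5 * h @ H @ h \
                    + (M / 6.0) * np.linalg.norm(h) ** 3
            if f(x + h) <= model:
                break          # model overestimated f: step is safe
            M *= 2.0           # model too optimistic: regularize more
        x, M = x + h, M / 2.0  # optimistically try a smaller M next time
    return x

# Example: a strongly convex quadratic-plus-quartic test function.
if __name__ == "__main__":
    f = lambda x: 0.5 * x @ x + 0.25 * np.sum(x ** 4)
    grad = lambda x: x + x ** 3
    hess = lambda x: np.eye(x.size) + np.diag(3.0 * x ** 2)
    print(adaptive_cubic_newton(f, grad, hess, np.array([2.0, -1.5])))
```

Since the accepted step minimizes a model whose value at h = 0 equals f(x), the objective decreases monotonically, so the trailing example stays in a sublevel set and converges to the origin in a handful of iterations.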

Similar Articles

1. Gradient regularization of Newton method with Bregman distances.
Math Program. 2024;204(1-2):1-25. doi: 10.1007/s10107-023-01943-7. Epub 2023 Mar 24.
2. Local convergence of tensor methods.
Math Program. 2022;193(1):315-336. doi: 10.1007/s10107-020-01606-x. Epub 2021 Jan 4.
3. Implementable tensor methods in unconstrained convex optimization.
Math Program. 2021;186(1):157-183. doi: 10.1007/s10107-019-01449-1. Epub 2019 Nov 21.
4. Adaptive Restart of the Optimized Gradient Method for Convex Optimization.
J Optim Theory Appl. 2018 Jul;178(1):240-263. doi: 10.1007/s10957-018-1287-4. Epub 2018 May 7.

Cited By

1. Tensor methods for finding approximate stationary points of convex functions.
Optim Methods Softw. 2020 Sep 23;37(2):605-638. doi: 10.1080/10556788.2020.1818082. eCollection 2022.
2. Gradient regularization of Newton method with Bregman distances.
Math Program. 2024;204(1-2):1-25. doi: 10.1007/s10107-023-01943-7. Epub 2023 Mar 24.
3. Local convergence of tensor methods.
Math Program. 2022;193(1):315-336. doi: 10.1007/s10107-020-01606-x. Epub 2021 Jan 4.
4. Smoothness Parameter of Power of Euclidean Norm.
J Optim Theory Appl. 2020;185(2):303-326. doi: 10.1007/s10957-020-01653-6. Epub 2020 Mar 27.

References

1. Implementable tensor methods in unconstrained convex optimization.
Math Program. 2021;186(1):157-183. doi: 10.1007/s10107-019-01449-1. Epub 2019 Nov 21.
