UAdam: Unified Adam-Type Algorithmic Framework for Nonconvex Optimization.

Author information

Jiang Yiming, Liu Jinlan, Xu Dongpo, Mandic Danilo P

Affiliations

Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China

Department of Mathematics, Changchun Normal University, Changchun 130032, China

Publication information

Neural Comput. 2024 Aug 19;36(9):1912-1938. doi: 10.1162/neco_a_01692.

DOI: 10.1162/neco_a_01692
PMID: 39106463
Abstract

Adam-type algorithms have become a preferred choice for optimization in the deep learning setting; however, despite their success, their convergence is still not well understood. To this end, we introduce a unified framework for Adam-type algorithms, termed UAdam. It is equipped with a general form of the second-order moment, which makes it possible to include Adam and its existing and future variants as special cases, such as NAdam, AMSGrad, AdaBound, AdaFom, and Adan. The approach is supported by a rigorous convergence analysis of UAdam in the general nonconvex stochastic setting, showing that UAdam converges to the neighborhood of stationary points with a rate of O(1/T). Furthermore, the size of the neighborhood decreases as the parameter β1 increases. Importantly, our analysis only requires the first-order momentum factor to be close enough to 1, without any restrictions on the second-order momentum factor. Theoretical results also reveal the convergence conditions of vanilla Adam, together with the selection of appropriate hyperparameters. This provides a theoretical guarantee for the analysis, applications, and further developments of the whole general class of Adam-type algorithms. Finally, several numerical experiments are provided to support our theoretical findings.
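
To make the unified framework concrete, below is a minimal NumPy sketch of an Adam-type step in which the second-order moment is left pluggable, in the spirit of the general form the abstract describes. The names (uadam_step, adam_v, amsgrad_v), the specific update x ← x − α·m/(√v + ε), the omission of bias correction, and the toy objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adam_v(v, g, beta2=0.999):
    """Vanilla Adam second moment: exponential moving average of g^2."""
    return beta2 * v + (1.0 - beta2) * g**2

def amsgrad_v(v, g, beta2=0.999):
    """Simplified AMSGrad-style moment: kept nondecreasing so the
    effective step size cannot grow (the full method tracks the EMA
    and its running maximum separately)."""
    return np.maximum(v, beta2 * v + (1.0 - beta2) * g**2)

def uadam_step(x, m, v, grad_fn, second_moment, lr=1e-3, beta1=0.9, eps=1e-8):
    """One Adam-type update with a pluggable second-order moment.
    Swapping `second_moment` recovers different members of the family."""
    g = grad_fn(x)
    m = beta1 * m + (1.0 - beta1) * g        # first-order momentum
    v = second_moment(v, g)                  # generalized second-order moment
    x = x - lr * m / (np.sqrt(v) + eps)      # Adam-type step
    return x, m, v

# Demo: the nonconvex objective f(x) = x**4/4 - x**2/2 has stationary
# points at -1, 0, and 1; its gradient is f'(x) = x**3 - x.
grad = lambda x: x**3 - x
x, m, v = np.array([2.0]), np.zeros(1), np.zeros(1)
for _ in range(2000):
    x, m, v = uadam_step(x, m, v, grad, adam_v, lr=1e-2)
print(x)  # settles near the stationary point x = 1
```

The abstract's key theoretical knob appears here as beta1: the analysis asks the first-order momentum factor to be close enough to 1, while the choice of second_moment (and hence the second-order momentum factor) is essentially unconstrained. The other variants named above (NAdam, AdaBound, AdaFom, Adan) would correspond to other choices of the moment recursion and momentum, which this sketch does not attempt to reproduce.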


Similar articles

1. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization.
Neural Netw. 2022 Jan;145:300-307. doi: 10.1016/j.neunet.2021.10.026. Epub 2021 Nov 8.
2. A Unified Analysis of AdaGrad With Weighted Aggregation and Momentum Acceleration.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14482-14490. doi: 10.1109/TNNLS.2023.3279381. Epub 2024 Oct 7.
3. Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks.
IEEE Trans Cybern. 2022 Dec;52(12):13250-13261. doi: 10.1109/TCYB.2021.3107415. Epub 2022 Nov 18.
4. AdaCN: An Adaptive Cubic Newton Method for Nonconvex Stochastic Optimization.
Comput Intell Neurosci. 2021 Nov 10;2021:5790608. doi: 10.1155/2021/5790608. eCollection 2021.
5. Towards Understanding Convergence and Generalization of AdamW.
IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6486-6493. doi: 10.1109/TPAMI.2024.3382294. Epub 2024 Aug 6.
6. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
Neural Netw. 2021 Jul;139:17-23. doi: 10.1016/j.neunet.2021.02.011. Epub 2021 Feb 23.
7. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM.
Neurocomputing (Amst). 2022 Apr 7;481:333-356. doi: 10.1016/j.neucom.2022.01.014. Epub 2022 Jan 21.
8. ϵ-Approximation of Adaptive Leaning Rate Optimization Algorithms for Constrained Nonconvex Stochastic Optimization.
IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):8108-8115. doi: 10.1109/TNNLS.2022.3142726. Epub 2023 Oct 6.
9. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
Med Phys. 2023 Mar;50(3):1528-1538. doi: 10.1002/mp.15969. Epub 2022 Oct 6.