

Distributed Stochastic Proximal Algorithm With Random Reshuffling for Nonsmooth Finite-Sum Optimization.

Authors

Jiang Xia, Zeng Xianlin, Sun Jian, Chen Jie, Xie Lihua

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):4082-4096. doi: 10.1109/TNNLS.2022.3201711. Epub 2024 Feb 29.

DOI: 10.1109/TNNLS.2022.3201711
PMID: 36070265
Abstract

The nonsmooth finite-sum minimization is a fundamental problem in machine learning. This article develops a distributed stochastic proximal-gradient algorithm with random reshuffling to solve the finite-sum minimization over time-varying multiagent networks. The objective function is a sum of differentiable convex functions and nonsmooth regularization. Each agent in the network updates local variables by local information exchange and cooperates to seek an optimal solution. We prove that local variable estimates generated by the proposed algorithm achieve consensus and are attracted to a neighborhood of the optimal solution with an O((1/T)+(1/√T)) convergence rate, where T is the total number of iterations. Finally, some comparative simulations are provided to verify the convergence performance of the proposed algorithm.
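To make the abstract's update concrete, here is a minimal sketch of a consensus-based stochastic proximal-gradient loop with random reshuffling, applied to a toy distributed lasso problem. This is an illustration of the general technique, not the paper's exact algorithm: the problem sizes, the complete-graph mixing matrix `W`, the step-size schedule, and all variable names are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the nonsmooth regularizer here).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)

# Toy setup: m agents, each holding n local least-squares terms; the shared
# objective is (1/2) sum_ij (a_ij^T x - b_ij)^2 + lam * ||x||_1.
m, n, d = 4, 5, 3
A = rng.normal(size=(m, n, d))
x_true = np.array([1.0, 0.0, -2.0])
b = A @ x_true + 0.01 * rng.normal(size=(m, n))

lam = 0.05                      # l1 weight
W = np.full((m, m), 1.0 / m)    # doubly stochastic mixing matrix (complete graph)
x = np.zeros((m, d))            # one local estimate per agent

for epoch in range(200):
    step = 0.05 / (epoch + 1)                       # diminishing step size
    perms = [rng.permutation(n) for _ in range(m)]  # random reshuffling per agent
    for k in range(n):
        x = W @ x                                   # consensus with neighbors
        for i in range(m):
            j = perms[i][k]                         # next sample in this epoch's shuffle
            grad = A[i, j] * (A[i, j] @ x[i] - b[i, j])
            # Gradient step on the smooth part, then prox step on the l1 part.
            x[i] = soft_threshold(x[i] - step * grad, step * lam)

x_bar = x.mean(axis=0)  # averaged estimate; local copies should nearly agree
```

The structure mirrors the abstract: each agent mixes with neighbors (here a fixed complete graph rather than a time-varying network), visits its local samples in a freshly shuffled order each epoch, and applies a proximal step for the nonsmooth term; the diminishing step drives the local estimates toward consensus near a minimizer.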


Similar Articles

1. Distributed Stochastic Proximal Algorithm With Random Reshuffling for Nonsmooth Finite-Sum Optimization.
   IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):4082-4096. doi: 10.1109/TNNLS.2022.3201711. Epub 2024 Feb 29.
2. Distributed Stochastic Gradient Tracking Algorithm With Variance Reduction for Non-Convex Optimization.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5310-5321. doi: 10.1109/TNNLS.2022.3170944. Epub 2023 Sep 1.
3. Projected Primal-Dual Dynamics for Distributed Constrained Nonsmooth Convex Optimization.
   IEEE Trans Cybern. 2020 Apr;50(4):1776-1782. doi: 10.1109/TCYB.2018.2883095. Epub 2018 Dec 10.
4. Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise.
   IEEE Trans Cybern. 2023 Jun;53(6):3561-3573. doi: 10.1109/TCYB.2021.3127278. Epub 2023 May 17.
5. Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces.
   Comput Optim Appl. 2021;78(3):705-740. doi: 10.1007/s10589-020-00259-y. Epub 2021 Jan 12.
6. Distributed Continuous-Time Algorithms for Resource Allocation Problems Over Weight-Balanced Digraphs.
   IEEE Trans Cybern. 2018 Nov;48(11):3116-3125. doi: 10.1109/TCYB.2017.2759141. Epub 2017 Oct 17.
7. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
   IEEE Trans Neural Netw Learn Syst. 2020 Jul;31(7):2557-2568. doi: 10.1109/TNNLS.2019.2933452. Epub 2019 Sep 2.
8. Momentum Acceleration in the Individual Convergence of Nonsmooth Convex Optimization With Constraints.
   IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1107-1118. doi: 10.1109/TNNLS.2020.3040325. Epub 2022 Feb 28.
9. A Minibatch Proximal Stochastic Recursive Gradient Algorithm Using a Trust-Region-Like Scheme and Barzilai-Borwein Stepsizes.
   IEEE Trans Neural Netw Learn Syst. 2021 Oct;32(10):4627-4638. doi: 10.1109/TNNLS.2020.3025383. Epub 2021 Oct 5.
10. Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm.
    IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2344-2357. doi: 10.1109/TNNLS.2020.3004723. Epub 2021 Jun 2.