
Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions.

Authors

Spiridonoff Artin, Olshevsky Alex, Paschalidis Ioannis Ch

Affiliations

Division of Systems Engineering, Boston University, Boston, MA 02215, USA.

Publication

J Mach Learn Res. 2020;21.

PMID: 32989377
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7520166/
Abstract

We consider the standard model of distributed optimization of a sum of functions F(z) = ∑_{i=1}^{n} f_i(z), where node i in a network holds the function f_i(z). We allow for a harsh network model characterized by asynchronous updates, message delays, unpredictable message losses, and directed communication among nodes. In this setting, we analyze a modification of the Gradient-Push method for distributed optimization, assuming that (i) node i is capable of generating gradients of its function f_i(z) corrupted by zero-mean bounded-support additive noise at each step, (ii) F(z) is strongly convex, and (iii) each f_i(z) has Lipschitz gradients. We show that our proposed method asymptotically performs as well as the best bounds on centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all the functions f_1(z), …, f_n(z) at each step.
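For concreteness, the setting can be restated as follows. This is a paraphrase in notation of my own choosing (z, ξ, α are not fixed by the abstract); the second display is the centralized baseline the method is measured against.

```latex
% Distributed model: node i privately holds f_i; the network jointly solves
\min_{z \in \mathbb{R}^d} \; F(z) = \sum_{i=1}^{n} f_i(z)

% Centralized baseline: one gradient step along the sum of all n noisy
% gradients, each noise term \xi_{i,t} zero-mean with bounded support.
z_{t+1} = z_t - \alpha_t \sum_{i=1}^{n} \bigl( \nabla f_i(z_t) + \xi_{i,t} \bigr)
```

"Network-independent" performance then means the distributed method asymptotically matches the best bounds for this baseline, so the communication graph influences only the transient behavior, not the leading-order rate.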

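The sketch below illustrates the core gradient-push (push-sum) mechanism the paper builds on, in a deliberately simplified synchronous form. The quadratic f_i and the uniform noise are hypothetical choices of mine satisfying assumptions (i)-(iii) above; the paper's actual protocol additionally tolerates asynchrony, message delays, and message losses.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, n)        # per-node curvatures: f_i(z) = 0.5*a_i*(z - b_i)^2
b = rng.uniform(-1.0, 1.0, n)       # per-node minimizers
z_star = np.sum(a * b) / np.sum(a)  # minimizer of F(z) = sum_i f_i(z)

# Directed ring with self-loops: node j pushes to itself and to (j+1) mod n.
out_neighbors = {j: [j, (j + 1) % n] for j in range(n)}

def noisy_grad(i, z):
    """Gradient of f_i at z plus zero-mean, bounded-support additive noise."""
    return a[i] * (z - b[i]) + rng.uniform(-0.1, 0.1)

x = np.zeros(n)   # push-sum numerators (one scalar decision variable per node)
w = np.ones(n)    # push-sum weights

for t in range(1, 20001):
    # Push step: each node splits (x_j, w_j) equally over its out-neighbors,
    # so the implicit mixing matrix is column-stochastic (mass is conserved).
    x_new, w_new = np.zeros(n), np.zeros(n)
    for j in range(n):
        share = 1.0 / len(out_neighbors[j])
        for i in out_neighbors[j]:
            x_new[i] += share * x[j]
            w_new[i] += share * w[j]
    z = x_new / w_new                  # de-biased local estimates
    alpha = 1.0 / t                    # O(1/t) step size, suited to strong convexity
    for i in range(n):
        x_new[i] -= alpha * noisy_grad(i, z[i])
    x, w = x_new, w_new

print("local estimates:", np.round(z, 3), " true minimizer:", round(z_star, 3))
```

Each node's ratio z_i = x_i / w_i corrects the bias that the column-stochastic mixing step would otherwise introduce, which is why push-sum works on directed graphs where doubly stochastic averaging is unavailable.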

Figures (PMC7520166):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/69e61f4f90c2/nihms-1608067-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/e6f2459b8c13/nihms-1608067-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/dbab8e61890d/nihms-1608067-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/f52b96e1e47a/nihms-1608067-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/09a4dd749a1a/nihms-1608067-f0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/4feedcf12dc2/nihms-1608067-f0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43cf/7520166/13b9a078ac24/nihms-1608067-f0007.jpg

Similar Articles

1
Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions.
J Mach Learn Res. 2020;21.
2
A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
IEEE Trans Automat Contr. 2022 Nov;67(11):5900-5915. doi: 10.1109/tac.2021.3126253. Epub 2021 Nov 9.
3
Push-Sum Distributed Online Optimization With Bandit Feedback.
IEEE Trans Cybern. 2022 Apr;52(4):2263-2273. doi: 10.1109/TCYB.2020.2999309. Epub 2022 Apr 5.
4
Hybrid-DCA: A double asynchronous approach for stochastic dual coordinate ascent.
J Parallel Distrib Comput. 2020 Sep;143:47-66. doi: 10.1016/j.jpdc.2020.04.002. Epub 2020 Apr 13.
5
Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise.
IEEE Trans Cybern. 2023 Jun;53(6):3561-3573. doi: 10.1109/TCYB.2021.3127278. Epub 2023 May 17.
6
Distributed Nesterov Gradient and Heavy-Ball Double Accelerated Asynchronous Optimization.
IEEE Trans Neural Netw Learn Syst. 2021 Dec;32(12):5723-5737. doi: 10.1109/TNNLS.2020.3027381. Epub 2021 Nov 30.
7
Duality-Free Methods for Stochastic Composition Optimization.
IEEE Trans Neural Netw Learn Syst. 2019 Apr;30(4):1205-1217. doi: 10.1109/TNNLS.2018.2866699. Epub 2018 Sep 12.
8
Distributed Optimization for Two Types of Heterogeneous Multiagent Systems.
IEEE Trans Neural Netw Learn Syst. 2021 Mar;32(3):1314-1324. doi: 10.1109/TNNLS.2020.2984584. Epub 2021 Mar 1.
9
Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm.
IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2344-2357. doi: 10.1109/TNNLS.2020.3004723. Epub 2021 Jun 2.
10
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning.
IEEE Signal Process Mag. 2020 May;37(3):114-122. doi: 10.1109/msp.2020.2975212. Epub 2020 May 6.

Cited By

1
A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
IEEE Trans Automat Contr. 2022 Nov;67(11):5900-5915. doi: 10.1109/tac.2021.3126253. Epub 2021 Nov 9.
2
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning.
IEEE Signal Process Mag. 2020 May;37(3):114-122. doi: 10.1109/msp.2020.2975212. Epub 2020 May 6.

References

1
Federated learning of predictive models from federated Electronic Health Records.
Int J Med Inform. 2018 Apr;112:59-67. doi: 10.1016/j.ijmedinf.2018.01.007. Epub 2018 Jan 12.