

A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.

Author Information

Shi Pu, Alex Olshevsky, Ioannis Ch. Paschalidis

Affiliations

School of Data Science, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, China.

Department of Electrical and Computer Engineering and the Division of Systems Engineering, Boston University, Boston, MA.

Publication Information

IEEE Trans Automat Contr. 2022 Nov;67(11):5900-5915. doi: 10.1109/tac.2021.3126253. Epub 2021 Nov 9.

Abstract

This paper is concerned with minimizing the average of cost functions over a network in which agents may communicate and exchange information with each other. We consider the setting where only noisy gradient information is available. To solve the problem, we study the distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, DSGD asymptotically achieves, in expectation, the optimal network-independent convergence rate of centralized stochastic gradient descent (SGD). Our main contribution is to characterize the transient time needed for DSGD to approach this asymptotic convergence rate. Moreover, we construct a "hard" optimization problem that proves the sharpness of the obtained result. Numerical experiments demonstrate the tightness of the theoretical results.
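
For readers who want a concrete picture of the method being analyzed, the sketch below implements a standard DSGD update: each agent mixes its iterate with its neighbors' iterates using doubly stochastic weights, then takes a step along its own noisy local gradient. This is an illustration only; the quadratic local costs, ring topology, Metropolis mixing weights, noise level, and step-size schedule are assumptions made here, not details taken from the paper.

```python
# Illustrative DSGD sketch (assumptions: quadratic local costs, ring graph,
# lazy Metropolis mixing weights, diminishing step sizes -- none from the paper).
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 10, 5, 0.1               # agents, dimension, gradient-noise level

# Local costs f_i(x) = 0.5 * ||x - b_i||^2; the global optimum is mean(b_i).
B = rng.normal(size=(n, d))
x_star = B.mean(axis=0)

# Doubly stochastic, symmetric mixing matrix W for a ring graph.
W = np.eye(n) / 2
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

X = rng.normal(size=(n, d))            # one row of X per agent
for k in range(1, 5001):
    alpha = 1.0 / (k + 10)             # diminishing step size
    grads = X - B                      # exact local gradients
    noisy = grads + sigma * rng.normal(size=grads.shape)
    X = W @ X - alpha * noisy          # DSGD: mix with neighbors, then descend

err = np.mean(np.sum((X - x_star) ** 2, axis=1))
print(f"average squared distance to optimum: {err:.2e}")
```

With a diminishing step size of this kind, the agents' iterates eventually converge at a rate that no longer depends on the network topology; the paper's contribution is a sharp bound on how long (the transient time) DSGD takes to enter that regime.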

Similar Articles

1. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
   IEEE Trans Automat Contr. 2022 Nov;67(11):5900-5915. doi: 10.1109/tac.2021.3126253. Epub 2021 Nov 9.
2. Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning.
   IEEE Signal Process Mag. 2020 May;37(3):114-122. doi: 10.1109/msp.2020.2975212. Epub 2020 May 6.
4. Decentralized stochastic sharpness-aware minimization algorithm.
   Neural Netw. 2024 Aug;176:106325. doi: 10.1016/j.neunet.2024.106325. Epub 2024 Apr 17.
5. Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise.
   IEEE Trans Cybern. 2023 Jun;53(6):3561-3573. doi: 10.1109/TCYB.2021.3127278. Epub 2023 May 17.
6. Communication-Censored Distributed Stochastic Gradient Descent.
   IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6831-6843. doi: 10.1109/TNNLS.2021.3083655. Epub 2022 Oct 27.
7. Duality-Free Methods for Stochastic Composition Optimization.
   IEEE Trans Neural Netw Learn Syst. 2019 Apr;30(4):1205-1217. doi: 10.1109/TNNLS.2018.2866699. Epub 2018 Sep 12.
8. Preconditioned Stochastic Gradient Descent.
   IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1454-1466. doi: 10.1109/TNNLS.2017.2672978. Epub 2017 Mar 9.
9. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
   IEEE Trans Neural Netw Learn Syst. 2020 Jul;31(7):2557-2568. doi: 10.1109/TNNLS.2019.2933452. Epub 2019 Sep 2.
10. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
   IEEE Trans Neural Netw Learn Syst. 2020 Oct;31(10):4394-4400. doi: 10.1109/TNNLS.2019.2952219. Epub 2019 Dec 11.

