Shi Pu, Alex Olshevsky, Ioannis Ch. Paschalidis
School of Data Science, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, China.
Department of Electrical and Computer Engineering and the Division of Systems Engineering, Boston University, Boston, MA.
IEEE Trans Automat Contr. 2022 Nov;67(11):5900-5915. doi: 10.1109/tac.2021.3126253. Epub 2021 Nov 9.
This paper is concerned with minimizing the average of cost functions over a network in which agents may communicate and exchange information with each other. We consider the setting where only noisy gradient information is available. To solve the problem, we study the distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, DSGD asymptotically achieves, in expectation, the optimal network-independent convergence rate of centralized stochastic gradient descent (SGD). Our main contribution is to characterize the transient time needed for DSGD to approach this asymptotic convergence rate. Moreover, we construct a "hard" optimization problem that establishes the sharpness of the obtained result. Numerical experiments demonstrate the tightness of the theoretical results.
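To make the setting concrete, below is a minimal illustrative sketch of a DSGD update of the kind studied here: each agent averages its iterate with its neighbors via a doubly stochastic mixing matrix and then takes a local step along a noisy gradient. The ring topology, quadratic local costs, noise level, and step-size schedule are assumptions chosen for illustration, not details taken from the paper.

```python
import numpy as np

# Minimal DSGD sketch, assuming:
# - n agents on a ring graph with a doubly stochastic mixing matrix W
# - local costs f_i(x) = 0.5 * ||x - b_i||^2 (strongly convex and smooth)
# - i.i.d. Gaussian gradient noise with standard deviation sigma
# All names and parameters are illustrative, not from the paper.

rng = np.random.default_rng(0)
n, d = 10, 5                     # number of agents, problem dimension
sigma = 0.1                      # gradient-noise level
b = rng.normal(size=(n, d))      # per-agent targets; the average cost is minimized at b.mean(axis=0)

# Doubly stochastic mixing matrix for a ring: each agent averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

x = np.zeros((n, d))             # agent i keeps its own iterate x[i]
x_star = b.mean(axis=0)          # minimizer of (1/n) * sum_i f_i

for t in range(1, 5001):
    alpha = 1.0 / t                                           # diminishing step size (strongly convex setting)
    noisy_grad = (x - b) + sigma * rng.normal(size=(n, d))    # grad f_i(x_i) plus noise, for every agent
    # DSGD update: mix with neighbors, then take a local stochastic gradient step.
    x = W @ x - alpha * noisy_grad

avg_iterate = x.mean(axis=0)
print("distance of the network average to the optimum:", np.linalg.norm(avg_iterate - x_star))
```

In this sketch, the gap between the agents' average iterate and the optimum shrinks over time; the paper's analysis quantifies how long such a scheme takes, as a function of the network, before its expected error decays at the same rate as centralized SGD.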