Li Huaqing, Cheng Huqiang, Wang Zheng, Wu Guo-Cheng
IEEE Trans Neural Netw Learn Syst. 2021 Dec;32(12):5723-5737. doi: 10.1109/TNNLS.2020.3027381. Epub 2021 Nov 30.
In this article, we propose a novel Nesterov gradient and heavy-ball double accelerated distributed synchronous optimization algorithm, called NHDA, and adopt a general asynchronous model to further propose an effective asynchronous algorithm, called ASY-NHDA, for the distributed optimization problem over directed graphs, where each agent has access to a local objective function and computes the optimal solution by communicating only with its immediate neighbors. Our goal is to minimize the sum of all local objective functions, each satisfying strong convexity and Lipschitz continuity of the gradient. We consider a general asynchronous model in which agents communicate with their immediate neighbors and start new computations independently; that is, agents can communicate with their neighbors at any time without any coordination and use delayed information from their in-neighbors to compute a new update. Delays are arbitrary, unpredictable, and time-varying, but bounded. The theoretical analysis of NHDA is based on analyzing the interaction among the consensus, gradient-tracking, and optimization processes. For the analysis of ASY-NHDA, we equivalently transform the asynchronous system into an augmented delay-free synchronous system and prove its convergence using the generalized small-gain theorem. The results show that NHDA and ASY-NHDA converge to the optimal solution at a linear rate as long as the largest step size is positive and less than an explicitly estimated upper bound, and the largest momentum parameter is nonnegative and less than an upper bound. Finally, we demonstrate the advantages of ASY-NHDA through simulations.
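To make the synchronous setting concrete, the following is a minimal sketch of a gradient-tracking update over a directed ring, augmented with heavy-ball and Nesterov momentum terms in the spirit of NHDA. It is not the paper's exact algorithm: the quadratic local objectives, the ring topology, the row-/column-stochastic weight matrices, and the parameter values (`alpha`, `beta`, `gamma`) are all illustrative assumptions chosen so the iterates provably reach consensus on the global minimizer.

```python
import numpy as np

# Assumed toy problem: n agents, local objectives f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer of sum_i f_i is mean(b). (Illustrative only.)
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda z: z - b  # stacked local gradients evaluated at each agent's point

# Directed ring: agent i receives from agent i-1 and sends to agent i+1.
# A is row-stochastic (used for the consensus step on x);
# B is column-stochastic (used for the gradient-tracking step on y).
A = np.zeros((n, n))
B = np.zeros((n, n))
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = 0.5
    B[i, i] = B[(i + 1) % n, i] = 0.5

alpha, beta, gamma = 0.1, 0.1, 0.1  # step size, heavy-ball, Nesterov (assumed values)

x = np.zeros(n)          # agents' current estimates
x_prev = x.copy()
s = x + gamma * (x - x_prev)  # Nesterov look-ahead point
g_prev = grad(s)
y = g_prev.copy()        # tracker initialized at the local gradients

for _ in range(400):
    # Consensus + descent along the tracked gradient + heavy-ball momentum.
    x_next = A @ x - alpha * y + beta * (x - x_prev)
    # Gradients are evaluated at the Nesterov look-ahead point.
    s = x_next + gamma * (x_next - x)
    g = grad(s)
    # Gradient tracking: y asymptotically tracks the average gradient.
    y = B @ y + g - g_prev
    x_prev, x, g_prev = x, x_next, g

print(x)  # each entry converges to mean(b) = 2.5
```

Because the tracker `y` preserves the gradient sum (column-stochastic `B` plus the telescoping correction `g - g_prev`), the only consensual fixed point is the global minimizer; the momentum terms only reshape the transient, which is the mechanism the linear-rate analysis in the paper quantifies for both step-size and momentum bounds.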