Liu Yaohua, Wu Gang, Tian Zhi, Ling Qing
IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):3290-3304. doi: 10.1109/TNNLS.2021.3051638. Epub 2022 Aug 3.
In distributed learning and optimization, a network of multiple computing units coordinates to solve a large-scale problem. This article focuses on dynamic optimization over a decentralized network. We develop a communication-efficient algorithm based on the alternating direction method of multipliers (ADMM) with quantized and censored communications, termed DQC-ADMM. At each time step of the algorithm, the nodes collaborate to minimize the sum of their time-varying local objective functions. Through local iterative computation and communication, DQC-ADMM is able to track the time-varying optimal solution. Unlike traditional approaches, which require transmitting the exact local iterates among neighbors at every time step, we quantize the transmitted information and adopt a communication-censoring strategy to reduce the communication cost of the optimization process. To be specific, a node transmits the quantized version of its local information to its neighbors if and only if that value sufficiently deviates from the one previously transmitted. We theoretically justify that the proposed DQC-ADMM is capable of tracking the time-varying optimal solution, subject to a bounded error caused by the quantized and censored communications, as well as by the system dynamics. Through numerical experiments, we evaluate the tracking performance and communication savings of the proposed DQC-ADMM.
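The quantize-then-censor rule described above can be illustrated with a minimal sketch. This is not the authors' implementation: the uniform quantizer step size, the censoring threshold `tau`, and the function names (`quantize`, `censored_transmit`) are all hypothetical choices made for illustration. The key idea it shows is that a node only communicates when its quantized local iterate has drifted sufficiently far from the value its neighbors last received.

```python
import numpy as np

def quantize(x, step=0.1):
    # Uniform (rounding) quantizer with a hypothetical step size.
    return np.round(x / step) * step

def censored_transmit(x, last_sent, tau=0.5, step=0.1):
    """Return (message, new_last_sent).

    The node sends its quantized local iterate only when it deviates
    from the previously transmitted value by more than the censoring
    threshold tau; otherwise it stays silent (message is None) and
    neighbors keep using the stale copy.
    """
    q = quantize(x, step)
    if last_sent is None or np.linalg.norm(q - last_sent) > tau:
        return q, q          # transmit; neighbors update their copy
    return None, last_sent   # censored; no communication this round

# Toy run: a slowly drifting local iterate over a few time steps.
last = None
sent_count = 0
for t in range(10):
    x = np.array([0.05 * t, -0.05 * t])
    msg, last = censored_transmit(x, last)
    if msg is not None:
        sent_count += 1
# sent_count ends up well below 10: most rounds are censored,
# which is the source of the communication savings.
```

In the full algorithm, each node would run this rule inside its ADMM update loop, and the tracking error bound accounts for both the quantization granularity and the staleness introduced by censored (skipped) transmissions.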