School of Computer Science and Engineering, Southeast University, 211189, Nanjing, PR China.
School of Mathematics, Southeast University, 211189, Nanjing, PR China.
Neural Netw. 2023 Aug;165:472-482. doi: 10.1016/j.neunet.2023.06.001. Epub 2023 Jun 9.
This paper considers the decentralized optimization problem, where agents in a network cooperate to minimize the sum of their local objective functions through communication and local computation. We propose a decentralized second-order communication-efficient algorithm, the communication-censored and communication-compressed quadratically approximated alternating direction method of multipliers (ADMM), termed CC-DQM, which combines event-triggered communication with compressed communication. In CC-DQM, an agent transmits a compressed message only when its current primal variable has changed sufficiently relative to the last transmitted estimate. Moreover, to reduce computation cost, the Hessian update is also scheduled by the trigger condition. Theoretical analysis shows that the proposed algorithm still achieves exact linear convergence, despite compression error and intermittent communication, provided the local objective functions are strongly convex and smooth. Finally, numerical experiments demonstrate its satisfactory communication efficiency.
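The censoring-plus-compression rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual CC-DQM update: the uniform quantizer, the threshold test on the primal-variable change, and all names (`quantize`, `maybe_transmit`, `threshold`, `levels`) are assumptions chosen for the sketch.

```python
import numpy as np

def quantize(x, levels=16):
    """Uniform scalar quantizer: a simple stand-in for a generic
    communication compressor (not the specific one used in the paper)."""
    scale = float(np.max(np.abs(x)))
    if scale == 0.0:
        return x.copy()
    return np.round(x / scale * (levels - 1)) / (levels - 1) * scale

def maybe_transmit(x_current, x_last_sent, threshold=0.1):
    """Event-triggered (censored) communication: send a compressed update
    only if the primal variable has moved enough since the last transmission.
    Returns (estimate held by neighbors, whether a message was sent)."""
    if np.linalg.norm(x_current - x_last_sent) > threshold:
        return quantize(x_current), True
    # Change too small: censor the message; neighbors keep the old estimate.
    return x_last_sent, False
```

Under this rule, small per-iteration changes cost no communication at all, and large changes cost only a few bits per entry, which is the source of the communication savings the abstract reports.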