Wu Mou, Xiong Naixue, Vasilakos Athanasios V, Leung Victor C M, Chen C L Philip
IEEE Trans Cybern. 2022 May;52(5):4012-4026. doi: 10.1109/TCYB.2020.3011819. Epub 2022 May 19.
With the rise in the processing power of networked agents over the last decade, second-order methods for machine learning have received increasing attention. For solving distributed optimization problems over multiagent systems, Newton's method offers the benefits of fast convergence and high estimation accuracy. In this article, we propose a reinforced network Newton method with K-order control flexibility (RNN-K) in a distributed manner, integrating the consensus strategy and the latest knowledge across the network into the local descent direction. The key component of our method is to make full use of intermediate results from the local neighborhood to learn global knowledge, rather than using them only for the consensus effect as in most existing works, including gradient descent and Newton methods as well as their refinements. Such reinforcement revitalizes the traditional iterative consensus strategy to accelerate the descent along the Newton direction. The main difficulty in designing the approximated Newton descent in distributed settings is addressed by using a special Taylor expansion that follows the matrix splitting technique. Based on the truncation of the Taylor series, our method also exhibits a tradeoff between estimation accuracy and computation/communication cost, which provides control flexibility as a practical consideration. We theoretically derive sufficient conditions under which the proposed RNN-K method converges at least at a linear rate. Simulation results illustrate the effectiveness of the method when applied to three types of distributed optimization problems that arise frequently in machine-learning scenarios.
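To make the matrix-splitting idea concrete, the sketch below shows how a Newton direction can be approximated by a K-term truncated Taylor (Neumann) series of a split Hessian H = D - B, where D is an easily invertible (e.g., block-diagonal) part and B the off-diagonal coupling. This is a minimal illustration of the general technique the abstract refers to, not the paper's exact RNN-K update; the function name and variables are assumptions for the example.

```python
import numpy as np

def truncated_newton_direction(D, B, grad, K):
    """Approximate the Newton direction d = -(D - B)^{-1} grad with a
    K-term truncated Neumann series:
        (D - B)^{-1} = sum_{k>=0} (D^{-1} B)^k D^{-1},
    valid when the spectral radius of D^{-1} B is below 1.
    Larger K trades extra computation/communication for accuracy."""
    Dinv = np.linalg.inv(D)
    term = -Dinv @ grad          # k = 0 term of the series
    d = term.copy()
    for _ in range(K):
        term = Dinv @ (B @ term)  # apply one more power of D^{-1} B
        d += term
    return d

# Example: H = D - B with D the diagonal part of H.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
D = np.diag(np.diag(H))
B = D - H
g = np.array([1.0, 2.0])
d_exact = -np.linalg.solve(H, g)
d_approx = truncated_newton_direction(D, B, g, K=30)
```

In a distributed setting, applying B corresponds to one round of neighbor communication, so the truncation order K directly controls the accuracy/cost tradeoff mentioned above.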