Jakovetić Dušan, Krejić Nataša, Malaspina Greta
Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovića 4, 21000 Novi Sad, Serbia.
Dipartimento di Ingegneria Industriale, Università degli studi di Firenze, Viale G.B. Morgagni 40, 50134 Florence, Italy.
Comput Optim Appl. 2025;91(2):683-715. doi: 10.1007/s10589-025-00666-z. Epub 2025 Feb 26.
We consider two formulations for distributed optimization in which the nodes of a generic connected network solve a problem of common interest: distributed personalized optimization and consensus optimization. A new method termed DINAS (Distributed Inexact Newton method with Adaptive step size) is proposed. DINAS employs large, adaptively computed step sizes, requires less knowledge of global parameters than existing alternatives, and can operate without any local Hessian inverse computations or Hessian communications. When solving personalized distributed learning formulations, DINAS achieves quadratic convergence with respect to computational cost and linear convergence with respect to communication cost, the latter rate being independent of the local functions' condition numbers and of the network topology. When solving consensus optimization problems, DINAS is shown to converge to the global solution. Extensive numerical experiments demonstrate significant improvements of DINAS over existing alternatives. As a result of independent interest, we provide, for the first time, a convergence analysis of the Newton method with the adaptive Polyak step size when the Newton direction is computed inexactly in a centralized environment.
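The abstract's closing result, a Newton method with an adaptive Polyak-type step size and an inexactly computed Newton direction in a centralized environment, can be illustrated with a short sketch. The Python code below is not the authors' DINAS method or their exact step-size rule: it is a minimal illustration assuming a truncated conjugate-gradient solve for the Newton direction, a hypothetical damping rule alpha_k = min(1, c / ||grad f(x_k)||), and hypothetical tuning constants c and eta.

import numpy as np

def truncated_cg(H, b, eta):
    # Conjugate gradient for the SPD system H d = b, stopped early once the
    # residual satisfies ||H d - b|| <= eta * ||b||; this early stop is what
    # makes the outer Newton direction "inexact".
    d = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    stop = (eta * np.linalg.norm(b)) ** 2
    while rs > stop:
        Hp = H @ p
        a = rs / (p @ Hp)
        d += a * p
        r -= a * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def inexact_newton_adaptive(grad, hess, x0, c=1.0, eta=0.1, tol=1e-8, max_iter=100):
    # Outer loop: damped Newton with an adaptive, Polyak-style step size.
    # Far from the solution (large gradient norm) the step is short and
    # safeguarded; near the solution alpha reaches 1 and full (inexact)
    # Newton steps give fast local convergence.
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        d = truncated_cg(hess(x), -g, eta)   # inexact Newton direction
        alpha = min(1.0, c / gnorm)          # hypothetical adaptive rule
        x = x + alpha * d
    return x

# Toy strongly convex test problem: f(x) = 0.5 x^T A x + sum_i log cosh(x_i),
# whose unique minimizer is x* = 0.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x + np.tanh(x)
hess = lambda x: A + np.diag(1.0 / np.cosh(x) ** 2)
print(inexact_newton_adaptive(grad, hess, np.array([5.0, -3.0])))  # approx. [0, 0]

The sketch captures the qualitative behavior the abstract describes: the step size is computed adaptively from the current gradient norm rather than from a line search, and the linear system for the Newton direction is solved only approximately.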