Jiang Xia, Zeng Xianlin, Sun Jian, Chen Jie, Xie Lihua
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):4082-4096. doi: 10.1109/TNNLS.2022.3201711. Epub 2024 Feb 29.
Nonsmooth finite-sum minimization is a fundamental problem in machine learning. This article develops a distributed stochastic proximal-gradient algorithm with random reshuffling to solve finite-sum minimization over time-varying multiagent networks. The objective function is the sum of differentiable convex functions and a nonsmooth regularization term. Each agent in the network updates its local variables through local information exchange, and the agents cooperate to seek an optimal solution. We prove that the local variable estimates generated by the proposed algorithm achieve consensus and are attracted to a neighborhood of the optimal solution at an O(1/T + 1/√T) convergence rate, where T is the total number of iterations. Finally, comparative simulations are provided to verify the convergence performance of the proposed algorithm.
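To make the kind of update described above concrete, the following is a minimal Python sketch of one way a distributed proximal stochastic-gradient scheme with random reshuffling can be organized. It is an illustration under simplifying assumptions, not the paper's exact algorithm: it uses a fixed doubly stochastic mixing matrix W (the paper allows time-varying networks), a local least-squares loss as the differentiable convex term, and ℓ1 regularization, whose proximal operator is soft-thresholding. All function names, the step-size schedule, and the hyperparameters are hypothetical.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def distributed_prox_sgd_rr(data, labels, W, T, alpha=0.1, lam=0.01, seed=0):
    """Illustrative sketch (not the paper's exact update rule).

    Each agent i holds local samples (data[i], labels[i]), mixes its
    estimate with its neighbors' via the doubly stochastic matrix W,
    then sweeps its local samples in a freshly reshuffled order,
    taking a stochastic gradient step on the smooth local loss
    followed by the l1 proximal step.
    """
    rng = np.random.default_rng(seed)
    n_agents = W.shape[0]
    d = data[0].shape[1]
    x = np.zeros((n_agents, d))            # local estimates, one row per agent
    for t in range(T):
        step = alpha / np.sqrt(t + 1)      # diminishing step size
        mixed = W @ x                      # consensus via local information exchange
        for i in range(n_agents):
            perm = rng.permutation(len(labels[i]))   # random reshuffling
            xi = mixed[i]
            for k in perm:                 # one pass over the reshuffled local samples
                a, b = data[i][k], labels[i][k]
                grad = (a @ xi - b) * a    # gradient of the local least-squares term
                xi = soft_threshold(xi - step * grad, step * lam)
            x[i] = xi
    return x

# Toy usage: 4 agents on a ring, synthetic least-squares data.
n, d, m = 4, 5, 20
rng = np.random.default_rng(1)
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
x_true = rng.normal(size=d)
data = [rng.normal(size=(m, d)) for _ in range(n)]
labels = [A @ x_true + 0.01 * rng.normal(size=m) for A in data]
x_est = distributed_prox_sgd_rr(data, labels, W, T=50)
```

The diminishing step size is the usual ingredient behind rates of the stated O(1/T + 1/√T) flavor; a time-varying network, as treated in the paper, could be modeled by supplying a different mixing matrix W at each iteration t.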