Yuan Deming, Ho Daniel W C, Xu Shengyuan
IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2344-2357. doi: 10.1109/TNNLS.2020.3004723. Epub 2021 Jun 2.
This article considers the problem of stochastic strongly convex optimization over a network of multiple interacting nodes. The optimization is subject to a global inequality constraint and to the restriction that nodes only have access to stochastic gradients of their objective functions. We propose an efficient distributed non-primal-dual algorithm that incorporates the inequality constraint into the objective via a smoothing technique. We show that the proposed algorithm achieves an optimal O(1/T) convergence rate, where T is the total number of iterations, in the mean square distance from the optimal solution. In particular, we establish a high-probability bound for the proposed algorithm by showing that, with probability at least 1 - δ, it converges at a rate of O(ln(ln(T)/δ)/T). Finally, we provide numerical experiments to demonstrate the efficacy of the proposed algorithm.
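The core idea described above, replacing a hard global inequality constraint with a smooth penalty inside the objective and running distributed stochastic gradient steps with consensus averaging, can be sketched on a toy problem. Everything below is illustrative: the quadratic local objectives, the ball constraint, the softplus smoothing max(0, g) ≈ μ ln(1 + exp(g/μ)), and the penalty weight are assumptions standing in for the paper's exact construction, not a reproduction of its algorithm.

```python
import numpy as np

# Toy instance: n nodes jointly minimize sum_i 0.5*||x - a_i||^2
# subject to a global inequality constraint g(x) = ||x||^2 - r^2 <= 0.
# The constraint is folded into the objective via a softplus smoothing
# of max(0, g); lam is a hypothetical penalty weight.
rng = np.random.default_rng(0)
n, d, T = 5, 3, 2000
A = rng.normal(size=(n, d))        # local targets a_i (one per node)
W = np.full((n, n), 1.0 / n)       # doubly stochastic mixing matrix (complete graph)
r2, mu, lam = 1.0, 0.05, 10.0      # constraint radius^2, smoothing parameter, penalty weight
X = np.zeros((n, d))               # one local iterate per node

def smoothed_penalty_grad(x):
    """Gradient of lam * mu * log(1 + exp(g(x)/mu)), a smooth surrogate of lam*max(0, g)."""
    g = x @ x - r2                       # g(x) = ||x||^2 - r^2
    s = 1.0 / (1.0 + np.exp(-g / mu))    # sigmoid = derivative of the softplus smoothing
    return lam * s * 2.0 * x

for t in range(1, T + 1):
    X = W @ X                  # consensus averaging with neighbors
    step = 1.0 / t             # O(1/t) step size, standard under strong convexity
    for i in range(n):
        # Each node only sees a noisy (stochastic) gradient of its own f_i.
        noisy_grad = (X[i] - A[i]) + 0.1 * rng.normal(size=d)
        X[i] -= step * (noisy_grad + smoothed_penalty_grad(X[i]))

x_bar = X.mean(axis=0)
print(x_bar, x_bar @ x_bar)    # nodes roughly agree; constraint approximately satisfied
```

With the diminishing O(1/t) step size, the local iterates reach consensus and the averaged iterate settles near the constrained minimizer; the smoothing keeps the penalty differentiable at the constraint boundary, which is what makes a plain (non-primal-dual) stochastic gradient scheme applicable.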