
Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization.

Publication Information

IEEE Trans Cybern. 2016 Sep;46(9):2109-18. doi: 10.1109/TCYB.2015.2464255. Epub 2015 Aug 13.

Abstract

In this paper, we study the distributed constrained optimization problem where the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to the existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection at the last iteration is needed for our proposed method. We establish the convergence of the method by showing that it achieves an O(K^{-1/4}) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method.
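The abstract only outlines the method at a high level. Below is a minimal Python sketch of what a consensus-based regularized primal-dual subgradient iteration with a single terminal projection can look like; the local costs f_i, the constraint g, the ring-graph mixing matrix W, and the step-size and regularization schedules are illustrative assumptions, not the authors' exact update rules.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact algorithm):
# N nodes cooperatively minimize sum_i f_i(x) subject to g(x) <= 0 over a ring network.
# Each node keeps a primal estimate x_i and a dual estimate mu_i >= 0, mixes them with
# its neighbors via a doubly stochastic matrix W, then takes regularized primal-dual
# subgradient steps. The primal iterates are projected onto the simple set X only once,
# at the final iteration.

N, d, K = 6, 2, 20000                        # nodes, dimension, iterations
rng = np.random.default_rng(0)
targets = rng.normal(size=(N, d))            # local data: f_i(x) = ||x - a_i||^2
g = lambda x: np.sum(x**2) - 0.5             # assumed global constraint g(x) <= 0
grad_f = lambda i, x: 2.0 * (x - targets[i])
grad_g = lambda x: 2.0 * x

# Doubly stochastic mixing matrix for a ring graph.
W = np.eye(N) * 0.5
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

x = np.zeros((N, d))                         # primal estimates
mu = np.zeros(N)                             # dual estimates (one scalar constraint)
x_avg = np.zeros((N, d))                     # running averages of the primal iterates

for k in range(1, K + 1):
    alpha = 1.0 / np.sqrt(k)                 # diminishing step size (illustrative choice)
    eps = 1.0 / k ** 0.25                    # regularization on the dual variable
    v = W @ x                                # consensus step on primal estimates
    lam = W @ mu                             # consensus step on dual estimates
    for i in range(N):
        # Subgradient step on the regularized Lagrangian
        # L_i(x, mu) = f_i(x) + mu * g(x) - (eps/2) * mu^2.
        x[i] = v[i] - alpha * (grad_f(i, v[i]) + lam[i] * grad_g(v[i]))
        mu[i] = max(0.0, lam[i] + alpha * (g(v[i]) - eps * lam[i]))
    x_avg += (x - x_avg) / k

# Single projection at the last iteration: clip the averaged iterates to the box X = [-1, 1]^d.
x_out = np.clip(x_avg, -1.0, 1.0)
print("node estimates after one final projection:\n", x_out)
```

The key structural point mirrored here is that the per-iteration work is only a neighbor averaging step and a subgradient step; the projection onto the constraint-related set is deferred to the very end rather than performed every iteration.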

