Distributed Dual Subgradient Algorithms With Iterate-Averaging Feedback for Convex Optimization With Coupled Constraints

Author Information

Liang Shu, Wang Le Yi, Yin George

Publication Information

IEEE Trans Cybern. 2021 May;51(5):2529-2539. doi: 10.1109/TCYB.2019.2933003. Epub 2021 Apr 15.

Abstract

This article considers a general model of distributed convex optimization with possibly local constraints, coupled equality constraints, and coupled inequality constraints, where the coupled equality constraints are affine and the coupled inequality constraints can be nonaffine. To solve this problem, we present two algorithms. The first algorithm is similar to a dual subgradient algorithm that requires a center node in the network. The main advantage of the first algorithm is that it achieves the optimal convergence rate O(1/√k). Moreover, it does not require additional treatment for the primal recovery. These merits are achieved by using an iterate-averaging feedback technique on the basis of the dual subgradient method. The second algorithm further removes the requirement of a center node by employing consensus tracking iterates. As a result, the second algorithm is fully distributed at the price of achieving an O(ln k/√k) convergence rate.
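To make the iterate-averaging feedback idea concrete, here is a minimal Python sketch on a toy problem with a single coupled affine equality constraint. The toy data (c, a, b), the closed-form primal step, and the choice to evaluate the constraint residual at the running primal average are illustrative assumptions, one plausible reading of the technique rather than the authors' exact update.

```python
import numpy as np

# Toy instance: minimize ||x - c||^2 subject to a single coupled
# affine equality constraint a @ x = b. The Lagrangian minimizer has
# a closed form, which keeps the sketch short.
rng = np.random.default_rng(0)
n = 5
c = rng.standard_normal(n)
a = rng.standard_normal(n)
b = 1.0

lam = 0.0            # dual variable for the equality constraint
x_avg = np.zeros(n)  # running average of the primal iterates

for k in range(1, 2001):
    # Primal step: x = argmin_x ||x - c||^2 + lam * (a @ x - b),
    # i.e., set the gradient 2*(x - c) + lam*a to zero.
    x = c - 0.5 * lam * a
    # Iterate averaging: x_avg becomes the ergodic mean of x_1..x_k.
    x_avg += (x - x_avg) / k
    # Dual subgradient step with averaging feedback: the constraint
    # residual is evaluated at the averaged iterate (an assumed form
    # of the feedback; the paper's exact update may differ). The
    # 1/sqrt(k) step size matches the O(1/sqrt(k)) rate above.
    lam += (a @ x_avg - b) / np.sqrt(k)

print("constraint residual at averaged iterate:", float(a @ x_avg - b))
```

Feeding the residual of the averaged iterate back into the dual update is what lets the averaged primal sequence be reported directly, without the separate primal-recovery step the abstract says the method avoids. In the fully distributed variant described above, the centrally held dual variable would instead be maintained as local copies synchronized by consensus tracking.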
