Bitterlich Sandy, Boţ Radu Ioan, Csetnek Ernö Robert, Wanka Gert
1Faculty of Mathematics, Chemnitz University of Technology, 09126 Chemnitz, Germany.
2University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria.
J Optim Theory Appl. 2019;182(1):110-132. doi: 10.1007/s10957-018-01454-y. Epub 2018 Dec 24.
The Alternating Minimization Algorithm was proposed by Paul Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one component of the objective is assumed to be strongly convex. The implementability of the method is affected by the fact that one of the subproblems to be solved in each iteration does not, in general, amount to evaluating a proximal operator through a closed formula. In this paper, we allow an additional smooth convex function in each block of the objective and propose a proximal version of the algorithm, obtained by equipping the iterations with proximal terms induced by variable metrics. For suitable choices of these metrics, solving the two subproblems of the iterative scheme reduces to the computation of proximal operators. We investigate the convergence of the proposed algorithm in a real Hilbert space setting and illustrate its numerical performance on two applications in image processing and machine learning.
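For orientation, the following is a minimal sketch of Tseng's classical AMA iteration for the two-block problem min f(x) + g(z) subject to Ax + Bz = b with f strongly convex; the notation (penalty parameter c > 0, multiplier iterate p^k) is standard but not taken from this abstract. The proximal variant studied in the paper augments the two subproblems with variable-metric proximal terms so that, for suitable metrics, each reduces to a proximal operator evaluation.

% Sketch of Tseng's classical AMA scheme (assumed standard form, not quoted from the paper):
%   min_{x,z} f(x) + g(z)  subject to  Ax + Bz = b,  with f strongly convex.
\begin{align*}
  x^{k+1} &\in \operatorname*{argmin}_{x} \; f(x) + \langle p^{k}, Ax \rangle, \\
  z^{k+1} &\in \operatorname*{argmin}_{z} \; g(z) + \langle p^{k}, Bz \rangle
             + \tfrac{c}{2}\,\bigl\| Ax^{k+1} + Bz - b \bigr\|^{2}, \\
  p^{k+1} &= p^{k} + c\,\bigl( Ax^{k+1} + Bz^{k+1} - b \bigr).
\end{align*}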