Bingsheng He, Han Liu, Zhaoran Wang, Xiaoming Yuan
International Centre of Management Science and Engineering, and Department of Mathematics, Nanjing University, Nanjing, 210093, China. This author was supported by NSFC grant 91130007 and MOEC fund 20110091110004.
Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544. This author was supported by NSF grant III-1116730.
SIAM J Optim. 2014 Jul;24(3):1011-1040. doi: 10.1137/13090849X.
In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor to PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is also established. We demonstrate the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
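For orientation, the following is a minimal sketch of the iteration the abstract refers to, written for the standard two-block separable model and in notation chosen here for illustration (penalty parameter β > 0, Lagrange multiplier λ, relaxation factor α); the precise scheme and assumptions are stated in the paper itself.

\[
\min \{\, \theta_1(x) + \theta_2(y) \;\mid\; Ax + By = b,\ x \in \mathcal{X},\ y \in \mathcal{Y} \,\}
\]

\[
\begin{aligned}
x^{k+1} &\in \arg\min_{x \in \mathcal{X}} \Big\{ \theta_1(x) - (\lambda^{k})^{\top}(Ax + By^{k} - b) + \tfrac{\beta}{2}\,\|Ax + By^{k} - b\|^{2} \Big\}, \\
\lambda^{k+\frac{1}{2}} &= \lambda^{k} - \alpha\beta\,(Ax^{k+1} + By^{k} - b), \\
y^{k+1} &\in \arg\min_{y \in \mathcal{Y}} \Big\{ \theta_2(y) - (\lambda^{k+\frac{1}{2}})^{\top}(Ax^{k+1} + By - b) + \tfrac{\beta}{2}\,\|Ax^{k+1} + By - b\|^{2} \Big\}, \\
\lambda^{k+1} &= \lambda^{k+\frac{1}{2}} - \alpha\beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}
\]

With α = 1 this is the original PRSM; the strictly contractive variant discussed above restricts α to the interval (0, 1) in both multiplier updates, while skipping the intermediate update (i.e., taking λ^{k+1/2} = λ^{k}) recovers the ADMM/DRSM scheme that the abstract contrasts it with.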