
Extensions to the Proximal Distance Method of Constrained Optimization

Authors

Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange

Affiliations

Department of Computational Medicine, University of California, Los Angeles CA 90095-1596, USA.

Department of Statistics, University of California, Los Angeles CA 90095-1554, USA.

Publication

J Mach Learn Res. 2022;23.

Abstract

The current paper studies the problem of minimizing a loss ℓ(x) subject to constraints of the form Ax ∈ C, where C is a closed set, convex or not, and A is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives ℓ(x) + λ dist(Ax, C)² involving large tuning constants λ and the squared Euclidean distance of Ax from C. The next iterate x^{k+1} of the corresponding proximal distance algorithm is constructed from the current iterate x^k by minimizing a majorizing surrogate function. For fixed λ, a subanalytic loss ℓ(x), and a subanalytic constraint set C, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
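The iteration described above can be illustrated in its simplest setting. The following is a minimal Python sketch, assuming A = I, a quadratic loss ℓ(x) = ½‖x − b‖², the convention ℓ(x) + (λ/2)‖x − P_C(x^k)‖² for the surrogate, and projection onto the nonnegative orthant as C; the function name `proximal_distance` and the geometric annealing schedule for λ are illustrative assumptions, not taken from the authors' Julia package. In this special case the surrogate has the closed-form minimizer (b + λ P_C(x^k)) / (1 + λ).

```python
import numpy as np

def proximal_distance(b, project, lam=1.0, mult=1.2, iters=200):
    """Toy proximal distance iteration for min 0.5*||x - b||^2 s.t. x in C.

    Assumes A = I. `project` computes the Euclidean projection P_C.
    Each step exactly minimizes the majorizing surrogate
        0.5*||x - b||^2 + (lam/2)*||x - P_C(x_k)||^2,
    then increases lam so iterates are driven toward C.
    """
    x = np.asarray(b, dtype=float).copy()
    for _ in range(iters):
        p = project(x)                    # anchor point P_C(x^k)
        x = (b + lam * p) / (1.0 + lam)   # closed-form surrogate minimizer
        lam *= mult                       # anneal the tuning constant upward
    return x

# Usage: project onto the nonnegative orthant C = {x : x >= 0}.
# The exact constrained minimizer is max(b, 0) componentwise.
b = np.array([1.5, -2.0, 0.3, -0.1])
x_star = proximal_distance(b, lambda v: np.maximum(v, 0.0))
```

As λ grows, the penalty term dominates and the iterates approach the constraint set, which is the essence of the Beltrami-Courant penalty strategy; the paper's SD variant replaces the exact surrogate solve used here with a steepest descent step when the linear system is expensive.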


