Geiersbach Caroline, Scarinci Teresa
Weierstrass Institute, Mohrenstrasse 39, 10117 Berlin, Germany.
Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, Via Vetoio - Loc. Coppito, 67010 L'Aquila, Italy.
Comput Optim Appl. 2021;78(3):705-740. doi: 10.1007/s10589-020-00259-y. Epub 2021 Jan 12.
For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method applied to Hilbert spaces, motivated by optimization problems with partial differential equation (PDE) constraints with random inputs and coefficients. We study stochastic algorithms for nonconvex and nonsmooth problems, where the nonsmooth part is convex and the nonconvex part is the expectation, which is assumed to have a Lipschitz continuous gradient. The optimization variable is an element of a Hilbert space. We show almost sure convergence of strong limit points of the random sequence generated by the algorithm to stationary points. We demonstrate the stochastic proximal gradient algorithm on a tracking-type functional with an ℓ¹-penalty term constrained by a semilinear PDE and box constraints, where input terms and coefficients are subject to uncertainty. We verify conditions for ensuring convergence of the algorithm and show a simulation.
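To make the iteration concrete, here is a minimal finite-dimensional sketch in Python of the stochastic proximal gradient step u_{n+1} = prox_{t_n g}(u_n - t_n G(u_n, ξ_n)); it is not the paper's Hilbert-space, PDE-constrained setting. The smooth term in the sketch is a convex least-squares expectation with randomly perturbed data (the paper's analysis also covers nonconvex smooth parts), the nonsmooth term g combines an ℓ¹ penalty with box constraints, and all problem data, step-size choices, and helper names (soft_threshold, prox_l1_box, sample_grad) are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_l1_box(v, tau, lo, hi):
    """Prox of tau*||.||_1 + indicator of the box [lo, hi].
    Both terms are coordinatewise separable, so soft-thresholding
    followed by clipping to the box gives the exact prox."""
    return np.clip(soft_threshold(v, tau), lo, hi)

def stochastic_prox_grad(u0, sample_grad, lam, lo, hi, steps=2000, seed=0):
    """Stochastic proximal gradient iteration
        u_{n+1} = prox_{t_n g}( u_n - t_n * G(u_n, xi_n) ),
    where g = lam*||.||_1 + indicator of [lo, hi], G(., xi_n) is an
    unbiased sample of the gradient of the smooth expectation term,
    and the step sizes t_n are diminishing (Robbins-Monro type:
    sum t_n = inf, sum t_n^2 < inf)."""
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    for n in range(1, steps + 1):
        t_n = 1.0 / (10.0 + n)            # diminishing step size
        g = sample_grad(u, rng)           # single-sample gradient estimate
        u = prox_l1_box(u - t_n * g, t_n * lam, lo, hi)
    return u

# Illustrative problem data (assumptions, not taken from the paper):
# smooth part E_xi[ 0.5 * ||A(xi) u - b(xi)||^2 ] with randomly
# perturbed data A(xi), b(xi).
d = 20
data_rng = np.random.default_rng(42)
A0 = data_rng.standard_normal((30, d))
u_true = np.zeros(d)
u_true[:5] = 1.0
b0 = A0 @ u_true

def sample_grad(u, rng):
    # Draw one realization of the random data and return the gradient
    # of the sampled objective 0.5*||A u - b||^2 (an unbiased estimate
    # of the gradient of the expectation).
    A = A0 + 0.05 * rng.standard_normal(A0.shape)
    b = b0 + 0.05 * rng.standard_normal(b0.shape)
    return A.T @ (A @ u - b)

u_hat = stochastic_prox_grad(np.zeros(d), sample_grad, lam=0.1, lo=-2.0, hi=2.0)
print("nonzero components:", int(np.count_nonzero(np.abs(u_hat) > 1e-6)))
```

Because the ℓ¹ penalty and the box indicator are both coordinatewise separable, their joint proximal operator reduces to soft-thresholding followed by projection onto the box; in the paper's infinite-dimensional setting the prox is taken in the corresponding Hilbert (function) space instead.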