IEEE Trans Neural Netw Learn Syst. 2014 Mar;25(3):545-56. doi: 10.1109/TNNLS.2013.2278427.
A neural network based on smoothing approximation is presented for a class of nonsmooth, nonconvex constrained optimization problems, in which the objective function is nonsmooth and nonconvex, the equality constraint functions are linear, and the inequality constraint functions are nonsmooth and convex. This approach finds a Clarke stationary point of the optimization problem by following a continuous path defined by a solution of an ordinary differential equation. Global convergence is guaranteed if either the feasible set is bounded or the objective function is level bounded. In particular, the proposed network does not require: 1) the initial point to be feasible; 2) a prior penalty parameter to be chosen exactly; or 3) a differential inclusion to be solved. Numerical experiments and comparisons with some existing algorithms are presented to illustrate the theoretical results and show the efficiency of the proposed network.
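The core idea of the abstract — replacing a nonsmooth term with a smooth approximation and following the resulting gradient-flow ODE toward a stationary point — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual network or problem class: it minimizes the unconstrained nonsmooth function f(x) = |x1| + (x2 - 1)^2, smooths |t| as sqrt(t^2 + mu^2), and integrates dx/dt = -grad f_mu(x) with forward-Euler steps while shrinking the smoothing parameter mu.

```python
import numpy as np

# Toy smoothed objective: f_mu(x) = sqrt(x1^2 + mu^2) + (x2 - 1)^2,
# a smooth approximation of the nonsmooth f(x) = |x1| + (x2 - 1)^2.
def grad_f_mu(x, mu):
    g1 = x[0] / np.sqrt(x[0] ** 2 + mu ** 2)  # d/dx1 sqrt(x1^2 + mu^2)
    g2 = 2.0 * (x[1] - 1.0)                   # d/dx2 (x2 - 1)^2
    return np.array([g1, g2])

def smoothed_gradient_flow(x0, steps=5000, h=1e-2, mu0=1.0):
    """Forward-Euler integration of the ODE dx/dt = -grad f_mu(x),
    with the smoothing parameter mu driven toward zero over time."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        mu = mu0 / (1 + k)          # decrease smoothing along the path
        x = x - h * grad_f_mu(x, mu)  # one Euler step along the ODE path
    return x

x_star = smoothed_gradient_flow([3.0, -2.0])
# x_star approaches the minimizer (0, 1) of f (up to step-size accuracy)
```

The sketch uses a fixed step size and a simple 1/(1+k) smoothing schedule purely for illustration; the paper's network instead follows a continuous ODE path and handles linear equality and nonsmooth convex inequality constraints, which this toy example omits.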