IEEE Trans Neural Netw Learn Syst. 2018 Mar;29(3):534-544. doi: 10.1109/TNNLS.2016.2635676. Epub 2016 Dec 22.
In this paper, a one-layer recurrent neural network based on calculus and the penalty method is proposed for solving constrained complex-variable convex optimization problems. It is proved that, for any initial point in a given domain, the state of the proposed neural network reaches the feasible region in finite time and ultimately converges to an optimal solution of the constrained complex-variable convex optimization problem. Compared with existing neural networks for complex-variable convex optimization, the proposed network has lower model complexity and better convergence properties. Numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.
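The penalty-based dynamics the abstract describes can be illustrated with a minimal sketch. The problem instance, penalty weight, step size, and forward-Euler discretization below are all illustrative assumptions, not the paper's actual network model: we minimize f(z) = |z - a|^2 over complex z subject to |z| <= 1, by descending the Wirtinger gradient of f plus an exact-penalty term for the constraint.

```python
# Hypothetical sketch (not the paper's model): penalty-based gradient
# flow for a constrained complex-variable convex problem,
#   minimize |z - a|^2  subject to  |z| <= 1,
# discretized by forward Euler. The unconstrained minimizer a lies
# outside the feasible region, so the penalty term is active at the end.

a = 2.0 + 2.0j   # unconstrained minimizer (infeasible: |a| > 1)
sigma = 10.0     # penalty weight; large enough for an exact penalty here
dt = 1e-3        # Euler step size
z = 0.0 + 0.0j   # initial state

for _ in range(20000):
    # Wirtinger gradient of f = |z - a|^2 with respect to conj(z): z - a
    grad_f = z - a
    # (Sub)gradient of the penalty sigma * max(|z| - 1, 0): it vanishes
    # inside the feasible region and points along z/|z| outside it,
    # driving the state back toward the unit disk.
    r = abs(z)
    grad_p = sigma * (z / r) if r > 1.0 else 0.0
    z = z - dt * (grad_f + grad_p)

# The state settles near the boundary point a/|a|, the projection of a
# onto the unit disk (up to small chatter from the nonsmooth penalty).
print(z)
```

Because the penalty is nonsmooth at the constraint boundary, the discrete state chatters slightly around |z| = 1 rather than settling exactly; the continuous-time flow that the paper analyzes avoids this discretization artifact.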