Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
Department of Mathematics, Texas A&M University at Qatar, Doha 23874, Qatar.
Neural Netw. 2022 Sep;153:1-12. doi: 10.1016/j.neunet.2022.05.022. Epub 2022 Jun 2.
In this paper, several recurrent neural networks (RNNs) for solving the L1-minimization problem are proposed. First, a one-layer RNN based on the hyperbolic tangent function and the projection matrix is designed. The stability and global convergence of this RNN are then proved by the Lyapunov method. Next, the sliding mode control technique is introduced into the former RNN to design a finite-time RNN (FTRNN). Under the condition that the projection matrix satisfies the Restricted Isometry Property (RIP), a suitable Lyapunov function is constructed to prove that the FTRNN is stable in the Lyapunov sense and converges in finite time. Finally, the proposed RNN and FTRNN are compared with existing RNNs through experiments on sparse signal reconstruction and image reconstruction. The results further demonstrate the effectiveness and superior performance of the proposed RNN and FTRNN.
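For context, the L1-minimization (basis pursuit) problem addressed by such networks is conventionally written as follows; the notation here ($A$ for the projection/measurement matrix, $b$ for the observation vector, $x$ for the sparse signal) follows standard convention and is an assumption rather than notation taken from the paper itself:

$$\min_{x \in \mathbb{R}^{n}} \|x\|_{1} \quad \text{subject to} \quad Ax = b,$$

where $A \in \mathbb{R}^{m \times n}$ with $m < n$. When $A$ satisfies the RIP, the sparsest solution can be recovered by minimizing the $\ell_{1}$-norm. A neurodynamic solver such as the proposed RNN evolves a state vector whose equilibrium corresponds to a solution of this problem, and the sliding-mode terms in the FTRNN are what yield convergence to that equilibrium in finite rather than merely asymptotic time.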