Learning and convergence analysis of neural-type structured networks.
Polycarpou MM, Ioannou PA.
Dept. of Electr. Eng.-Syst., Univ. of Southern California, Los Angeles, CA.
IEEE Trans Neural Netw. 1992;3(1):39-50. doi: 10.1109/72.105416.
A class of feedforward neural networks, termed structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. A convergence analysis for the training of structured networks is presented. Since the learning techniques used in structured networks are also employed in the training of general neural networks, the issue of convergence is discussed not only from a numerical algebra perspective but also as a means of deriving insight into connectionist learning. Bounds on the learning rate are developed under which exponential convergence of the weights to their correct values is proved for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving. For a special class of problems, the orthogonalized back-propagation algorithm, an optimal recursive update law for minimizing a least-squares cost functional, is introduced; it guarantees exact convergence in one epoch. Several learning issues are investigated.
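To make the learning-rate bound concrete: for linear equation solving, gradient-type training of the kind the abstract describes amounts to minimizing the least-squares cost J(w) = (1/2)||Aw - b||^2 by gradient descent, for which a standard sufficient condition for exponential convergence of the weights is 0 < eta < 2 / lambda_max(A^T A). The sketch below illustrates this setting under that assumption; it is not the paper's code, and the function name and step-size choice are illustrative.

```python
import numpy as np

def solve_linear_gd(A, b, epochs=20000):
    """Solve A w = b by gradient descent on J(w) = 0.5 * ||A w - b||^2.

    Illustrative sketch only: for full-rank A, the weight error contracts
    geometrically whenever 0 < eta < 2 / lambda_max(A^T A).
    """
    # Largest eigenvalue of A^T A sets the admissible learning-rate range.
    lam_max = np.linalg.eigvalsh(A.T @ A).max()
    eta = 1.0 / lam_max  # safely inside the convergence interval

    w = np.zeros(A.shape[1])
    for _ in range(epochs):
        grad = A.T @ (A @ w - b)  # gradient of the least-squares cost
        w -= eta * grad
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))  # generic full-rank coefficient matrix
b = rng.standard_normal(5)
w = solve_linear_gd(A, b)
print(np.linalg.norm(A @ w - b))  # residual, driven toward 0 exponentially
```

Read against this example, the one-epoch exact convergence of orthogonalized back-propagation corresponds, loosely, to updating along mutually orthogonal input directions (in the spirit of a QR-based solve) rather than along raw gradients.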