Wu Wei, Feng Guorui, Li Zhengxue, Xu Yuesheng
Applied Mathematics Department, Dalian University of Technology, Dalian 116023, China.
IEEE Trans Neural Netw. 2005 May;16(3):533-40. doi: 10.1109/TNN.2005.844903.
Online gradient methods are widely used for training feedforward neural networks. In this paper we prove a convergence theorem for an online gradient method with variable step size for back-propagation (BP) neural networks with one hidden layer. Unlike most existing convergence results, which are probabilistic and nonmonotone in nature, the convergence result established here is deterministic and monotone.
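To make the setting concrete, the following is a minimal sketch of online (per-sample) gradient training of a one-hidden-layer sigmoid network with a diminishing step size. All specifics here are illustrative assumptions, not the paper's construction: the toy linear target, the hidden-layer width, and the particular schedule `eta = eta0 / (1 + c*step)` are chosen only to exhibit a variable step size; the paper's theorem concerns the conditions under which such a scheme converges.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: a noisy-free linear target to be fit by the network.
X = rng.normal(size=(200, 3))
t = X @ np.array([0.5, -0.3, 0.8])

H = 5  # hidden units (arbitrary choice for illustration)
W = rng.normal(scale=0.1, size=(H, 3))  # input-to-hidden weights
v = rng.normal(scale=0.1, size=H)       # hidden-to-output weights

def mse(X, t, W, v):
    return 0.5 * np.mean((sigmoid(X @ W.T) @ v - t) ** 2)

err_before = mse(X, t, W, v)

eta0, step = 0.5, 0
for epoch in range(50):
    for x, y in zip(X, t):  # online: update after every single sample
        step += 1
        eta = eta0 / (1.0 + 0.01 * step)  # variable (diminishing) step size
        h = sigmoid(W @ x)
        e = v @ h - y  # instantaneous output error
        # Gradients of the instantaneous squared error 0.5 * e**2
        grad_v = e * h
        grad_W = np.outer(e * v * h * (1.0 - h), x)
        v -= eta * grad_v
        W -= eta * grad_W

err_after = mse(X, t, W, v)
```

With a schedule like this, the training error typically decreases from `err_before` to `err_after`; the paper's contribution is to show that, under suitable conditions on the step size, this decrease is deterministic and monotone rather than holding only in probability.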