School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China.
Neural Netw. 2012 Sep;33:127-35. doi: 10.1016/j.neunet.2012.04.013. Epub 2012 May 9.
The weight decay method, one of the classical complexity regularization techniques, is simple and appears to work well in some applications of backpropagation neural networks (BPNN). This paper establishes weak and strong convergence results for cyclic and almost-cyclic learning BPNN with a penalty term (CBP-P and ACBP-P, respectively). Convergence is guaranteed under relaxed conditions on the activation functions and the learning rate, together with an assumption on the stationary set of the error function. Furthermore, the boundedness of the weights during training is obtained in a simple and transparent way. Numerical simulations support our theoretical results and demonstrate that ACBP-P outperforms CBP-P in both convergence speed and generalization ability.
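To make the two training schemes concrete, below is a minimal sketch (not the authors' code) of online backpropagation with a weight-decay penalty for a one-hidden-layer sigmoid network. The function name `train`, the penalty coefficient `lam`, and the diminishing 1/(1+epoch) learning-rate schedule are illustrative assumptions, not taken from the paper; the only difference between the cyclic and almost-cyclic schemes is the sample ordering used within each epoch.

```python
import numpy as np

# Illustrative sketch of CBP-P-style vs. ACBP-P-style training:
# cyclic = fixed sample order every epoch; almost-cyclic = fresh
# random permutation every epoch. The L2 penalty (weight decay)
# contributes the extra lam * weight term in each gradient step.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, hidden=8, epochs=200, lam=1e-4, almost_cyclic=True):
    n, d = X.shape
    W = rng.normal(scale=0.5, size=(hidden, d))   # input-to-hidden weights
    v = rng.normal(scale=0.5, size=hidden)        # hidden-to-output weights
    for epoch in range(epochs):
        eta = 0.5 / (1 + epoch)                   # diminishing learning rate (assumed schedule)
        order = rng.permutation(n) if almost_cyclic else np.arange(n)
        for i in order:
            h = sigmoid(W @ X[i])                 # hidden-layer activations
            out = sigmoid(v @ h)                  # network output
            # gradient of the per-sample squared error
            delta_out = (out - y[i]) * out * (1.0 - out)
            grad_v = delta_out * h
            grad_W = np.outer(delta_out * v * h * (1.0 - h), X[i])
            # weight-decay penalty adds lam * weight to each gradient
            v -= eta * (grad_v + lam * v)
            W -= eta * (grad_W + lam * W)
    return W, v

# tiny usage example on XOR-like data
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
W, v = train(X, y, almost_cyclic=True)
```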