Ng Sin-Chun, Cheung Chi-Chung, Leung Shu-Hung
School of Science and Technology, The Open University of Hong Kong, Hong Kong, China.
IEEE Trans Neural Netw. 2004 Nov;15(6):1411-23. doi: 10.1109/TNN.2004.836237.
This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. MGFPROP increases the convergence rate by magnifying the gradient function of the activation function, while DWM reduces the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that both approaches outperform BP and other modified BP algorithms on a number of learning problems. Moreover, integrating the two approaches into a new algorithm, called MDPROP, further improves on the performance of MGFPROP and DWM. In our simulations, MDPROP consistently outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
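To make the magnified-gradient idea concrete, the sketch below shows on-line backpropagation for a single-hidden-layer sigmoid network in which the logistic derivative o(1-o) in each delta term is raised to a power 1/S (S >= 1). This exponent form is an assumption consistent with the abstract's description of magnifying the gradient of the activation function, not necessarily the exact MGFPROP formula; the magnification factor S, layer sizes, learning rate, and the XOR test problem are hypothetical choices for illustration, and DWM and MDPROP are not modeled here.

```python
# Illustrative sketch only: standard on-line BP with a "magnified gradient"
# factor in the delta terms. Setting S = 1 recovers standard BP.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mgf_bp(X, T, n_hidden=4, eta=0.5, S=2.0, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.uniform(-0.5, 0.5, (n_in + 1, n_hidden))   # +1 row for bias
    W2 = rng.uniform(-0.5, 0.5, (n_hidden + 1, n_out))

    for _ in range(epochs):
        for x, t in zip(X, T):
            # forward pass
            h = sigmoid(np.append(x, 1.0) @ W1)
            o = sigmoid(np.append(h, 1.0) @ W2)

            # magnified gradient: raising the logistic derivative o*(1-o)
            # to the power 1/S enlarges near-zero gradients when a unit
            # saturates, counteracting the flat-spot slowdown of plain BP.
            g_out = (o * (1.0 - o)) ** (1.0 / S)
            delta_out = (t - o) * g_out

            g_hid = (h * (1.0 - h)) ** (1.0 / S)
            delta_hid = g_hid * (W2[:-1, :] @ delta_out)

            # weight updates (descent on the squared error)
            W2 += eta * np.outer(np.append(h, 1.0), delta_out)
            W1 += eta * np.outer(np.append(x, 1.0), delta_hid)
    return W1, W2

if __name__ == "__main__":
    # XOR as a small test problem (hypothetical example data)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = train_mgf_bp(X, T)
    H = sigmoid(np.hstack([X, np.ones((4, 1))]) @ W1)
    print(sigmoid(np.hstack([H, np.ones((4, 1))]) @ W2).round(3))
```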