Holm JE, Botha EC
Department of Electrical and Electronic Engineering, University of Pretoria, South Africa.
Network: Computation in Neural Systems. 1999 Feb;10(1):1-13.
Optimization of perceptron neural network classifiers requires a robust optimization algorithm. In general, the best network is selected after a number of optimization trials. An effective optimization algorithm generates good weight-vector solutions in few trial runs owing to its inherent ability to escape local minima, whereas a less effective algorithm requires a larger number of trial runs. Repetitive training and testing is a tedious process, so an effective algorithm is desirable to reduce training time and improve the quality of the set of available weight-vector solutions. We present leap-frog as a robust optimization algorithm for training neural networks. In this paper the dynamic principles of leap-frog are described, together with experiments that show its ability to generate reliable weight-vector solutions. Performance histograms are used to compare leap-frog with a variable-metric method, a conjugate-gradient method with modified restarts, and a constrained-momentum-based algorithm. Results indicate that leap-frog performs better in terms of classification error than the other three algorithms on two distinctly different test problems.
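
The abstract does not spell out the update rules, but leap-frog optimizers in the Snyman tradition treat the weight vector as a unit-mass particle moving through the potential field defined by the error surface, so kinetic energy accumulated on descents lets the trajectory coast through shallow local minima. The Python sketch below illustrates that dynamic principle on a toy objective; the time step dt, the rest start, and the midpoint-restart interference rule are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def leapfrog_minimize(f, grad, x0, dt=0.05, max_iter=2000, tol=1e-8):
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)                 # the particle starts at rest
        fx = f(x)
        for _ in range(max_iter):
            a = -grad(x)                     # force = negative error gradient
            if np.linalg.norm(a) < tol:      # vanishing force: stationary point
                break
            v_new = v + a * dt               # leap-frog velocity update
            x_new = x + v_new * dt           # leap-frog position update
            f_new = f(x_new)
            if f_new > fx:                   # uphill step: interfere by restarting
                x_new = 0.5 * (x + x_new)    # from the midpoint with a strongly
                v_new = 0.25 * (v + v_new)   # damped velocity (assumed rule)
                f_new = f(x_new)
            x, v, fx = x_new, v_new, f_new
        return x

    # Usage: a double-well objective whose shallow minimum near x = 0.96
    # traps plain gradient descent started from x = 2.
    f = lambda x: (x[0]**2 - 1)**2 + 0.3 * x[0]
    g = lambda x: np.array([4 * x[0] * (x[0]**2 - 1) + 0.3])
    print(leapfrog_minimize(f, g, x0=[2.0]))  # settles near the global minimum, x ~ -1.04

Started on the outer slope at x = 2, the particle builds up enough kinetic energy to coast over the barrier near x = 0 and settle in the global well near x = -1.04, whereas plain gradient descent from the same start point stops in the shallow well near x = 0.96; this energy-driven behaviour is the local-minimum-escaping ability the abstract attributes to leap-frog.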