Jordanov Ivan, Georgieva Antoniya
IEEE Trans Neural Netw. 2007 May;18(3):937-42. doi: 10.1109/TNN.2007.891633.
A novel hybrid global optimization (GO) algorithm applied to supervised learning of feedforward neural networks (NNs) is investigated. The network weights are determined by minimizing the traditional mean square error function. The optimization technique, called LP(tau)NM, combines a novel global heuristic search based on LPtau low-discrepancy sequences of points with a simplex local search. The proposed method is first tested on multimodal mathematical functions and subsequently applied to training moderate-size NNs on popular benchmark problems. Finally, the results are analyzed, discussed, and compared with those obtained from backpropagation (BP) (Levenberg-Marquardt) and differential evolution methods.
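The hybrid scheme can be illustrated with a minimal sketch, not the authors' exact LP(tau)NM implementation: candidate weight vectors are drawn from a Sobol-type (LPtau-family) low-discrepancy sequence, the mean square error is evaluated at each, and the best candidate seeds a Nelder-Mead simplex local search. The tiny 2-2-1 XOR network, the [-5, 5] weight bounds, and the sample sizes below are illustrative assumptions, not details taken from the paper.

```python
# Sketch: low-discrepancy global sampling + Nelder-Mead local refinement
# of feedforward-NN weights (illustrative, not the published LP(tau)NM code).
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

# XOR benchmark data (assumed toy problem)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 2
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # weights + biases of a 2-2-1 net

def mse(w):
    """Mean square error of a 2-N_HIDDEN-1 net with tanh hidden and sigmoid output units."""
    W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

# Global phase: evaluate the error on a low-discrepancy set of weight vectors in [-5, 5]^DIM.
sampler = qmc.Sobol(d=DIM, scramble=True, seed=0)
candidates = qmc.scale(sampler.random_base2(m=10), -5.0, 5.0)  # 2^10 quasi-random points
errors = np.apply_along_axis(mse, 1, candidates)
best = candidates[np.argmin(errors)]

# Local phase: Nelder-Mead simplex search started from the best low-discrepancy point.
result = minimize(mse, best, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
print("best sampled MSE: %.4f   refined MSE: %.6f" % (errors.min(), result.fun))
```

In practice a hybrid of this kind would keep several of the best-sampled points (or restart the local search) rather than refining a single candidate; the single-start version above is kept deliberately short.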