Hussein A. Abbass
Artificial Life and Adaptive Robotics Lab, School of Information Technology and Electrical Engineering, University of New South Wales, Australian Defence Force Academy, Canberra, ACT 2600, Australia.
Neural Comput. 2003 Nov;15(11):2705-26. doi: 10.1162/089976603322385126.
Training artificial neural networks (ANNs) with backpropagation is usually a long process: the user must experiment with a number of network architectures, and larger networks incur greater computational cost in training time. The objective of this letter is to present an optimization algorithm that combines a multiobjective evolutionary algorithm with a gradient-based local search; in the rest of the letter, this is referred to as the memetic Pareto artificial neural network algorithm for training ANNs. The evolutionary approach is used to train the network and simultaneously optimize its architecture. The result is a set of networks, each attempting to optimize both the training error and the architecture. We also present a self-adaptive version with lower computational cost. We show empirically that the proposed method reduces training time compared to gradient-based techniques.
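The abstract's central idea — a set of networks in which each member trades off training error against architecture size — rests on Pareto dominance over those two objectives. The following is a minimal sketch of that comparison, assuming each candidate network is summarized by a hypothetical `(training_error, hidden_units)` pair; the paper's actual algorithm additionally evolves and locally refines the networks, which is not shown here.

```python
def dominates(a, b):
    """True if summary a Pareto-dominates b: no worse on both objectives
    (training error, architecture size) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of (error, size) summaries."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical candidates: (training error, number of hidden units).
pop = [(0.12, 8), (0.10, 12), (0.25, 3), (0.10, 8), (0.30, 2)]
print(pareto_front(pop))  # → [(0.25, 3), (0.10, 8), (0.30, 2)]
```

Here `(0.12, 8)` is dropped because `(0.10, 8)` achieves lower error with the same architecture; the surviving set is exactly the kind of error/size trade-off front the letter describes.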